Fixed-Length Poisson MRF: Adding Dependencies to the Multinomial

David I. Inouye, Pradeep Ravikumar, Inderjit S. Dhillon
Department of Computer Science, University of Texas at Austin
{dinouye,pradeepr,inderjit}@cs.utexas.edu

Abstract

We propose a novel distribution that generalizes the Multinomial distribution to enable dependencies between dimensions. Our novel distribution is based on the parametric form of the Poisson MRF model [1] but is fundamentally different because of the domain restriction to a fixed-length vector, as in a Multinomial where the number of trials is fixed or known. Thus, we propose the Fixed-Length Poisson MRF (LPMRF) distribution. We develop AIS sampling methods to estimate the likelihood and log partition function (i.e. the log normalizing constant), which had not been developed for the Poisson MRF model. In addition, we propose novel mixture and topic models that use the LPMRF as a base distribution and discuss the similarities and differences with previous topic models such as the recently proposed Admixture of Poisson MRFs [2]. We show the effectiveness of our LPMRF distribution over Multinomial models by evaluating test set perplexity on a dataset of abstracts and on Wikipedia. Qualitatively, we show that the positive dependencies discovered by the LPMRF are interesting and intuitive. Finally, we show that our algorithms are fast and scale well (code available online).

1 Introduction & Related Work

The Multinomial distribution seems to be a natural distribution for modeling count-valued data such as text documents. Indeed, most topic models such as PLSA [3], LDA [4] and numerous extensions—see [5] for a survey of probabilistic topic models—use the Multinomial as the fundamental base distribution while adding complexity using other latent variables. This is most likely due to the extreme simplicity of Multinomial parameter estimation—simple frequency counts—usually smoothed by the simple Dirichlet conjugate prior.
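To make this concrete, a minimal sketch (with made-up toy counts and an illustrative smoothing value, not taken from the paper) of how Dirichlet-smoothed Multinomial estimation reduces to add-α smoothed relative frequencies:

```python
def smoothed_multinomial(counts, alpha=0.1):
    """Posterior-mean Multinomial estimate under a symmetric Dirichlet(alpha)
    prior: add-alpha smoothed relative frequencies."""
    total = sum(counts)
    p = len(counts)
    return [(c + alpha) / (total + alpha * p) for c in counts]

# toy word counts over a 4-word vocabulary; smoothing keeps the
# unseen word's probability strictly positive
probs = smoothed_multinomial([5, 0, 3, 2], alpha=0.1)
```

Even the unobserved word receives nonzero probability, which is the practical role of the Dirichlet prior mentioned above.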
In addition, because the Multinomial requires the length of a document to be fixed or pre-specified, usually a Poisson distribution on document length is assumed. This yields a Poisson-Multinomial distribution, which by well-known results is merely an independent Poisson model.¹ However, the Multinomial assumes independence between words because it is merely the sum of independent categorical variables. This restriction does not fit real-world text well. For example, words like "neural" and "network" tend to co-occur quite frequently in NIPS papers. Thus, we seek to relax the word-independence assumption of the Multinomial.

The Poisson MRF distribution (PMRF) [1] seems to be a potential replacement for the Poisson-Multinomial because it allows some dependencies between words. The Poisson MRF is derived by assuming that every conditional distribution is a 1D Poisson. However, the original formulation in [1] only allowed negative dependencies. Thus, several modifications were proposed in [6] to allow positive dependencies. One proposal, the Truncated Poisson MRF (TPMRF), simply truncates the PMRF by setting a maximum count for every word. While this formulation may provide interesting parameter estimates, a TPMRF with positive dependencies may be almost entirely concentrated at the corners of the joint distribution because of the quadratic term in the log probability (see the bottom left of Fig. 1). In addition, the log partition function of the TPMRF is intractable to estimate even for a small number of dimensions because the sum is over an exponential number of terms. Thus, we seek a distribution different from the TPMRF that allows positive dependencies but is more appropriately normalized. We observe that the Multinomial is proportional to an independent Poisson model with the domain restricted to a fixed length L.

¹The assumption of Poisson document length is not important for most topic models [4].
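This observation can be checked numerically. A small sketch (with arbitrary illustrative Poisson rates, not from the paper) comparing the product-of-independent-Poissons pmf, conditioned on the slice ∥x∥₁ = L, against the corresponding Multinomial pmf:

```python
import itertools
import math

def poisson_logpmf(x, lam):
    return x * math.log(lam) - lam - math.lgamma(x + 1)

def multinomial_logpmf(x, q):
    L = sum(x)
    out = math.lgamma(L + 1)
    for xs, qs in zip(x, q):
        out += xs * math.log(qs) - math.lgamma(xs + 1)
    return out

lam = [2.0, 0.5, 1.5]            # arbitrary Poisson rates (illustrative)
L = 5
q = [l / sum(lam) for l in lam]  # induced Multinomial probabilities

# total independent-Poisson mass on the slice ||x||_1 = L
slice_mass = sum(
    math.exp(sum(poisson_logpmf(xs, ls) for xs, ls in zip(x, lam)))
    for x in itertools.product(range(L + 1), repeat=3) if sum(x) == L)

# conditional probability of one particular count vector on the slice
x = (2, 1, 2)
cond = math.exp(sum(poisson_logpmf(xs, ls) for xs, ls in zip(x, lam))) / slice_mass
```

The conditional probability `cond` matches the Multinomial pmf at `x` with probabilities λ_s / Σλ, which is exactly the proportionality claim above.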
Thus, in a similar way, we propose a Fixed-Length Poisson MRF (LPMRF) that is proportional to a PMRF but is restricted to a domain with a fixed vector length, i.e. where ∥x∥₁ = L. This distribution is quite different from previous PMRF variants because the normalization is very different, as described in later sections. For a motivating example, Fig. 1 shows the marginal distributions of the empirical distribution and of fitted models using only three words from the Classic3 dataset, which contains documents on library science and aerospace engineering (see Sec. 4). Clearly, real-world text has positive dependencies, as evidenced by the empirical marginals of "boundary" and "layer" (referring to the boundary layer in fluid dynamics), and the LPMRF fits this empirical distribution best. In addition, the log partition function, and hence the likelihood, of the LPMRF can be approximated using sampling, as described in later sections. Under the PMRF or TPMRF models, both the log partition function and the likelihood were computationally intractable to compute exactly.² Thus, approximating the log partition function of an LPMRF opens the door for likelihood-based hyperparameter estimation and model evaluation that was not possible with the PMRF.

[Figure 1: Marginal distributions from the Classic3 dataset for the words "boundary", "library" and "layer". (Top left) Empirical distribution; (top right) estimated Multinomial × Poisson joint distribution, i.e. independent Poissons; (bottom left) Truncated Poisson MRF; (bottom right) Fixed-Length PMRF × Poisson joint distribution.]
The simple empirical distribution clearly shows a strong positive dependency between "boundary" and "layer" but a strong negative dependency between "boundary" and "library". Clearly, the word-independent Multinomial-Poisson distribution underfits the data. While the Truncated PMRF can model dependencies, it has obvious normalization problems because the normalization is dominated by the extreme corner cases. The LPMRF-Poisson distribution fits the empirical data much more appropriately.

In the topic modeling literature, many researchers have recognized the problems with using the Multinomial as the base distribution. For example, interpreting a Multinomial can be difficult since it only gives an ordering of words. Thus, multiple metrics have been proposed to evaluate topic models based on the perceived dependencies between words within a topic [7, 8, 9, 10]. In particular, [11] showed that the Multinomial assumption is often violated in real-world data. In another paper [12], the LDA topic assignments for each word are used to train a separate Ising model, i.e. a Bernoulli MRF, for each topic in a heuristic two-stage procedure. Instead of modeling dependencies a posteriori, we formulate a generalization of topic models that allows the LPMRF distribution to directly replace the Multinomial. This allows us to compute a topic model and word dependencies jointly under a unified model, as opposed to the two-stage heuristic procedure in [12].

²The example in Fig. 1 was computed by exhaustively computing the log partition function.

This model has some connection to the Admixture of Poisson MRFs model (APM) [2], which was the first topic model to consider word dependencies. However, the LPMRF topic model directly relaxes the LDA word-independence assumption (i.e.
the independent case is the same as LDA), whereas APM is only an indirect relaxation of LDA because APM mixes in the exponential-family canonical parameter space while LDA mixes in the standard Multinomial parameter space. Another difference from APM is that our proposed LPMRF topic model can actually produce topic assignments for each word, similar to LDA with Gibbs sampling [13]. Finally, the LPMRF topic model does not fall into the same generalization of topic models as APM because the instance-specific distribution is not an LPMRF, as described more fully in later sections. The follow-up APM paper [14] gives a fast algorithm for estimating the PMRF parameters. We use this algorithm as the basis for estimating the topic LPMRF parameters. For estimating the topic vectors for each document, we give a simple coordinate descent algorithm for estimation of the LPMRF topic model. This estimation of topic vectors can be seen as a direct relaxation of LDA and could even provide a different estimation algorithm for LDA.

2 Fixed-Length Poisson MRF

Notation. Let p, n and k denote the number of words, documents and topics respectively. We generally use uppercase letters for matrices (e.g. Φ, X), boldface lowercase letters or columns of matrices for vectors (e.g. x_i, θ, Φ_s) and lowercase letters for scalar values (e.g. x_is, θ_s).

Poisson MRF Definition. First, we briefly describe the Poisson MRF distribution and refer the reader to [1, 6] for more details. A PMRF can be parameterized by a node vector θ and an edge matrix Φ whose non-zeros encode the direct dependencies between words:

    Pr_PMRF(x | θ, Φ) = exp( θ^T x + x^T Φ x − Σ_{s=1}^p log(x_s!) − A(θ, Φ) ),

where A(θ, Φ) is the log partition function needed for normalization. Note that, without loss of generality, we can assume Φ is symmetric because it only appears in the symmetric quadratic term. The conditional distribution of one word given all the others, i.e.
Pr(x_s | x_{−s}), is a 1D Poisson distribution with natural parameter η_s = θ_s + x_{−s}^T Φ_s by construction. One primary issue with the PMRF is that the log partition function A(θ, Φ) is a log-sum over all vectors in Z₊^p, and thus with even one positive dependency the log partition function is infinite because of the quadratic term in the formulation. Yang et al. [6] tried to address this issue but, as illustrated in the introduction, their proposed modifications to the PMRF can yield unusual models for real-world data.

[Figure 2: LPMRF distribution for L = 10 (left) and L = 20 (right) with negative, zero and positive dependencies. The LPMRF distribution can be quite different from a Multinomial (zero dependency) and thus provides a much more flexible parametric distribution for count data.]

LPMRF Definition. The Fixed-Length Poisson MRF (LPMRF) is a simple yet fundamentally different distribution from the PMRF. Letting L ≡ ∥x∥₁ be the length of the document, we define the LPMRF distribution as follows:

    Pr_LPMRF(x | θ, Φ, L) = exp( θ^T x + x^T Φ x − Σ_s log(x_s!) − A_L(θ, Φ) )    (1)
    A_L(θ, Φ) = log Σ_{x ∈ X_L} exp( θ^T x + x^T Φ x − Σ_s log(x_s!) )           (2)
    X_L = { x ∈ Z₊^p : ∥x∥₁ = L }.                                               (3)

The only difference from the PMRF parametric form is the log partition function A_L(θ, Φ), which sums over the restricted set X_L rather than over the unbounded set Z₊^p as for the PMRF. This domain restriction is critical to formulating a tractable and reasonable distribution. Combined with a Poisson distribution on the vector length L = ∥x∥₁, the LPMRF can be a much more suitable distribution for documents than a Multinomial.
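For tiny p and L, the normalizer of Eq. (2) can be computed by brute-force enumeration of X_L, which also checks the reduction to a Multinomial. A minimal sketch (parameter values are illustrative, not from the paper):

```python
import itertools
import math

def log_partition(theta, phi, L):
    """Brute-force A_L(theta, Phi) of Eq. (2): log-sum over all
    x in Z_+^p with ||x||_1 = L (tractable only for tiny p and L)."""
    p = len(theta)
    total = 0.0
    for x in itertools.product(range(L + 1), repeat=p):
        if sum(x) != L:
            continue
        lin = sum(theta[s] * x[s] for s in range(p))
        quad = sum(phi[s][t] * x[s] * x[t] for s in range(p) for t in range(p))
        base = sum(math.lgamma(xs + 1) for xs in x)  # sum_s log(x_s!)
        total += math.exp(lin + quad - base)
    return math.log(total)

# with Phi = 0, the multinomial theorem gives the closed form
# A_L(theta, 0) = L * log(sum_s exp(theta_s)) - log(L!)
theta = [0.5, -0.2, 0.1]
L = 4
A_indep = log_partition(theta, [[0.0] * 3 for _ in range(3)], L)
A_closed = L * math.log(sum(math.exp(t) for t in theta)) - math.lgamma(L + 1)

# a positive dependency between words 0 and 1 changes the normalizer
phi = [[0.0, 0.1, 0.0], [0.1, 0.0, 0.0], [0.0, 0.0, 0.0]]
A_dep = log_partition(theta, phi, L)
```

With Φ = 0 the brute-force value agrees with the Multinomial closed form, and a positive dependency strictly increases A_L, which previews why positive dependencies make the unrestricted PMRF normalizer diverge.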
The LPMRF distribution reduces to the standard Multinomial if there are no dependencies. However, if there are dependencies, the distribution can be quite different from a Multinomial, as illustrated in Fig. 2 for an LPMRF with p = 2 and L fixed at either 10 or 20 words. After the original submission, we realized that for p = 2 the LPMRF model is the same as the multiplicative binomial generalization in [15]. Thus, the LPMRF model can be seen as a multinomial generalization (p ≥ 2) of the multiplicative binomial in [15].

LPMRF Parameter Estimation. Because the parametric form of the LPMRF model is the same as that of the PMRF model, and we primarily care about finding the correct dependencies, we use the PMRF estimation algorithm described in [14] to estimate θ and Φ. The algorithm in [14] approximates the likelihood by the pseudo-likelihood and performs ℓ1-regularized nodewise Poisson regressions. The ℓ1 regularization is important both for the sparsity of the dependencies and for the computational efficiency of the algorithm. While the PMRF and LPMRF are different distributions, the pseudo-likelihood approximation provides good results, as shown in the results section. We present timing results showing the scalability of this algorithm in Sec. 5. Other parameter estimation methods would be an interesting area of future work.

2.1 Likelihood and Log Partition Estimation

Unlike previous work on the PMRF or TPMRF distributions, we develop a tractable approximation to the LPMRF log partition function (Eq. 2) so that we can compute approximate likelihood values. The likelihood of a model can be fundamentally important for hyperparameter optimization and model evaluation.

LPMRF Annealed Importance Sampling. First, we develop an LPMRF Gibbs sampler by considering the most common form of Multinomial sampling, namely taking the sum of a sequence of L categorical variables.
From this intuition, we sample one word at a time while holding all other words fixed. The probability that one word w_ℓ in the sequence equals s, given all the other words, is proportional to exp(θ_s + 2Φ_s^T x_{−ℓ}), where x_{−ℓ} is the sum of all the other words. See the Appendix for the details of Gibbs sampling. Then, we derive an annealed importance sampler [16] using this Gibbs sampler by scaling the Φ matrix for each successive distribution by a linear sequence starting at 0 and ending at 1 (i.e. γ = 0, …, 1). Thus, we start with a simple Multinomial sample from Pr(x | θ, 0·Φ, L) = Pr_Mult(x | θ, L) and then Gibbs sample from each successive distribution Pr_LPMRF(x | θ, γΦ, L), updating the sample weight as defined in [16], until we reach the final distribution at γ = 1. From these weighted samples, we can compute an estimate of the log partition function [16].

Upper Bound. Using Hölder's inequality, a simple convex relaxation and the partition function of a Multinomial, an upper bound for the log partition function can be computed:

    A_L(θ, Φ) ≤ L² λ_{Φ,1} + L log(Σ_s exp θ_s) − log(L!),

where λ_{Φ,1} is the maximum eigenvalue of Φ. See the Appendix for the full derivation. We simplify this upper bound by subtracting log(Σ_s exp θ_s) from θ (which does not change the distribution) so that the second term becomes 0. Then, neglecting the constant term −log(L!) that does not interact with the parameters (θ, Φ), the log partition function is upper bounded by a simple quadratic function of L.

Weighting Φ for Different L. For datasets in which L is observed for every sample but is not uniform, such as document collections, the log partition function grows quadratically in L if there are any positive dependencies, as suggested by the upper bound. This causes long documents to have extremely small likelihood. Thus, we must modify Φ as L grows to counteract this effect. We propose a simple modification that scales Φ for each L: Φ̃_L = ω(L)Φ.
In particular, we propose to use a sigmoidal weighting based on the log-logistic cumulative distribution function (CDF): ω(L) = 1 − LogLogisticCDF(L | α_LL, β_LL). We set the β_LL parameter to 2 so that the tail is O(1/L²), which eventually causes the upper bound to approach a constant. Letting L̄ = (1/n) Σ_i L_i be the mean instance length, we choose α_LL = cL̄ for some small constant c. This choice of α_LL helps the weighting function scale appropriately for corpora of different average lengths.

Final Approximation Method for All L. For our experiments, we approximate the log partition function for all L in the range of the corpus. We use 100 AIS samples for 50 different test values of L, linearly spaced between 0.5L̄ and 3L̄ so that we cover both small and large values of L. This gives a total of 5,000 annealed importance samples. We use the quadratic form of the upper bound U_a(L) = ω(L)L²a (ignoring constants with respect to Φ) and find a constant a that upper bounds all 50 estimates:

    a_max = max_L [ω(L)L²]⁻¹ ( Â_L(θ, Φ) − L log(Σ_s exp θ_s) + log(L!) ),

where Â_L is the AIS estimate of the log partition function at each of the 50 test values of L. This gives a smooth approximation for all L that is greater than or equal to every individual estimate (a figure of an example approximation is in the Appendix).

Mixtures of LPMRF. With an approximation to the likelihood, we can easily formulate an estimation algorithm for a mixture of LPMRFs using a simple alternating, EM-like procedure. First, given the cluster assignments, the LPMRF parameters are estimated as explained above. Then, the best cluster assignments are computed by assigning each instance to the highest-likelihood cluster. Extending the LPMRF to topic models requires more careful analysis, as described next.

3 Generalizing Topic Models using Fixed-Length Distributions

In standard topic models like LDA, the distribution contains a unique topic variable for every word in the corpus.
Essentially, this means that every word is drawn from a categorical distribution. However, this does not allow us to capture dependencies between words because only one word is drawn at a time. Therefore, we need to reformulate LDA so that the words from a topic are sampled jointly from a Multinomial. From this reformulation, we can then simply replace the Multinomial with an LPMRF to obtain a topic model with LPMRF as the base distribution. Our reformulation of LDA groups the topic indicator variables for each word into k vectors corresponding to the k different topics. These k "topic indicator" vectors z^j are then assumed to be drawn from a Multinomial with fixed length ∥z^j∥₁. This grouping of topic vectors yields an equivalent distribution because the topic indicators are exchangeable and independent of one another given the observed word and the document-topic distribution. This leads to the following generalization of topic models, in which an observation x_i is the sum of k hidden variables z_i^j:

    Generic Topic Model                              Novel LPMRF Topic Model
    w_i ~ SimplexPrior(α)                            w_i ~ Dirichlet(α)
    L_i ~ LengthDistribution(L̄)                      L_i ~ Poisson(λ = L̄)
    m_i ~ PartitionDistribution(w_i, L_i)            m_i ~ Multinomial(p = w_i; N = L_i)
    z_i^j ~ FixedLengthDist(φ^j; ∥z_i^j∥₁ = m_i^j)   z_i^j ~ LPMRF(θ^j, Φ^j; L = m_i^j)
    x_i = Σ_{j=1}^k z_i^j                            x_i = Σ_{j=1}^k z_i^j

Note that this generalization of topic models does not require the partition distribution and the fixed-length distribution to be the same. In addition, other distributions, such as the logistic normal prior, could be substituted for the Dirichlet prior on document-topic distributions. Finally, this generalization allows for real-valued topic models for other types of data, although exploring this is outside the scope of this paper. This generalization is distinct from the topic model generalization termed "admixtures" in [2].
Admixtures assume that each observation is drawn from an instance-specific base distribution whose parameters are a convex combination of shared topic parameters. Thus, an admixture of LPMRFs could be formulated by assuming that each document, given the document-topic weights w_i, is drawn from LPMRF(θ̄_i = Σ_j w_ij θ^j, Φ̄_i = Σ_j w_ij Φ^j; L = ∥x_i∥₁). Though this may be an interesting model in its own right and useful for future work, it is not the same as the model proposed above, because there the distribution of x_i is not an LPMRF but rather a sum of independent LPMRFs. One case, possibly the only case, where these two generalizations of topic models intersect is when the base distribution is a Multinomial (i.e. an LPMRF with Φ = 0). As another distinction from APM, the LPMRF topic model directly generalizes LDA because the LPMRF in the above model reduces to a Multinomial if Φ = 0. Fully exploring the differences between this topic model generalization and the admixture generalization is interesting but outside the scope of this paper.

With this formulation of LPMRF topic models, we can create a joint optimization problem that solves for the topic matrix Z_i = [z_i^1, z_i^2, …, z_i^k] of each document and for the shared LPMRF parameters θ^{1…k}, Φ^{1…k}. The optimization minimizes the negative log posterior:

    arg min_{Z_{1…n}, θ^{1…k}, Φ^{1…k}}  −(1/n) Σ_{i=1}^n Σ_{j=1}^k log Pr_LPMRF(z_i^j | θ^j, Φ^j, m_i^j)
                                          − Σ_{i=1}^n log Pr_prior(m_i^{1…k}) − Σ_{j=1}^k log Pr_prior(θ^j, Φ^j)
    s.t.  Z_i e = x_i,  Z_i ∈ Z₊^{p×k},

where e is the all-ones vector. Notice that the observations x_i only appear in the constraints. The prior on m_i^{1…k} can be related to the Dirichlet distribution, as in LDA, by taking Pr_prior(m_i^{1…k}) = Pr_Dir((m_i^1, …, m_i^k) / Σ_ℓ m_i^ℓ | α). Also, notice that the documents are all independent if the LPMRF parameters are known, so this optimization can be trivially parallelized.
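The generative process in the table above can be simulated directly in its Φ = 0 special case, where each z_i^j is an ordinary Multinomial draw (i.e. the LDA case). A minimal sketch with illustrative hyperparameters (two topics over a three-word vocabulary, all values made up):

```python
import math
import random

def sample_dirichlet(alpha):
    g = [random.gammavariate(a, 1.0) for a in alpha]
    s = sum(g)
    return [v / s for v in g]

def sample_poisson(lam):
    # Knuth's multiplication method; fine for modest mean lengths
    n, prod, target = 0, 1.0, math.exp(-lam)
    while prod > target:
        prod *= random.random()
        n += 1
    return max(n - 1, 0)

def sample_multinomial(n, probs):
    counts = [0] * len(probs)
    for _ in range(n):
        r, acc = random.random(), 0.0
        for s, q in enumerate(probs):
            acc += q
            if r <= acc:
                counts[s] += 1
                break
        else:
            counts[-1] += 1  # guard against floating-point rounding
    return counts

def generate_document(alpha, topics, mean_length):
    w = sample_dirichlet(alpha)          # document-topic weights w_i
    L = sample_poisson(mean_length)      # document length L_i
    m = sample_multinomial(L, w)         # per-topic word budgets m_i
    x = [0] * len(topics[0])
    for j, mj in enumerate(m):           # z_i^j ~ Multinomial(topics[j]; mj)
        for s, c in enumerate(sample_multinomial(mj, topics[j])):
            x[s] += c
    return x                             # x_i = sum_j z_i^j

random.seed(0)
doc = generate_document([0.5, 0.5], [[0.7, 0.2, 0.1], [0.1, 0.2, 0.7]], 20.0)
```

Replacing the per-topic Multinomial draw with an LPMRF Gibbs sampler of fixed length m_i^j would yield the full LPMRF topic model; this sketch only covers the independent special case.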
Connection to Collapsed Gibbs Sampling. This optimization is very similar to collapsed Gibbs sampling for LDA [13]. Essentially, the key part of estimating the topic model is estimating the topic indicators for each word in the corpus. The model parameters can then be estimated directly from these topic indicators. In the case of LDA, the Multinomial parameters are trivial to estimate by merely keeping track of counts, and thus the parameters can be updated in constant time for every topic resampled. This suggests that an interesting area of future work would be to understand the connections between collapsed Gibbs sampling and this optimization problem. It may be possible to use this optimization problem to speed up Gibbs sampling convergence or to provide a MAP phase after Gibbs sampling to obtain non-random estimates.

Estimating Topic Matrices Z_{1…n}. For LPMRF topic models, estimating the LPMRF parameters given the topic assignments requires solving another complex optimization problem. Thus, we pursue an alternating, EM-like scheme as in LPMRF mixtures. First, we estimate the LPMRF parameters with the PMRF algorithm from [14], and then we optimize the topic matrix Z_i ∈ Z₊^{p×k} for each document. Because of the constraints on Z_i, we pursue a simple dual coordinate descent procedure. We select two coordinates in row r of Z_i and determine whether the objective can be improved by moving a words from topic ℓ to topic q. Thus, we only need to solve a series of simple univariate problems. Each univariate problem has only x_is possible solutions, so if the maximum count of a word in a document is bounded by a constant, the univariate subproblems can be solved efficiently. More formally, we seek a step size a such that Ẑ_i = Z_i + a e_r e_ℓ^T − a e_r e_q^T gives a better objective value than Z_i. If we remove constant terms w.r.t.
a, we arrive at the following univariate optimization problem (suppressing dependence on i because each of the n subproblems is independent):

    arg min_{−z_r^ℓ ≤ a ≤ z_r^q}  −a[θ_r^ℓ − θ_r^q + 2 z_ℓ^T Φ_r^ℓ − 2 z_q^T Φ_r^q] + [log((z_r^ℓ + a)!) + log((z_r^q − a)!)]
                                  + A_{m^ℓ+a}(θ^ℓ, Φ^ℓ) + A_{m^q−a}(θ^q, Φ^q) − log(Pr_prior(m̃^{1…k})),

where m̃ is the new distribution of lengths given the step size a. The first term comes from the linear and quadratic sufficient statistics. The second term is the change in the base measure if a words are moved. The third term is the difference in log partition function when the lengths of the topic vectors change. Note that the log partition function can be precomputed, so it merely costs a table lookup. The prior also requires only a simple calculation to update. Thus the main computation is the inner product z_ℓ^T Φ_r^ℓ. However, this inner product can be maintained and updated efficiently so that it does not significantly affect the running time.

4 Perplexity Experiments

We evaluated our novel LPMRF model using perplexity on a held-out test set of documents from a corpus of research paper abstracts³ denoted Classic3 and from a collection of Wikipedia documents. The Classic3 dataset has three distinct topic areas: medical (Medline, 1033 documents), library information sciences (CISI, 1460) and aerospace engineering (CRAN, 1400).

³http://ir.dcs.gla.ac.uk/resources/test_collections/

Experimental Setup. We train all models on a 90% training split of the documents and compute held-out perplexity on the remaining 10%, where perplexity equals exp(−L(X_test | θ^{1…k}, Φ^{1…k}) / N_test), L is the log likelihood and N_test is the total number of words in the test set. We evaluate single, mixture and topic models with both the Multinomial and the LPMRF as the base distribution at k = {1, 3, 10, 20}.
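The perplexity metric just defined is a one-liner; a sketch with stand-in log-likelihood values (the vocabulary size and word count below are illustrative, not the paper's):

```python
import math

def perplexity(test_loglik, n_test_words):
    """perplexity = exp(-L(X_test) / N_test); lower is better."""
    return math.exp(-test_loglik / n_test_words)

# sanity check: a uniform model over a vocabulary of V words assigns
# log(1/V) to every word, so its perplexity is exactly V
V, N = 1000, 250
uniform_loglik = N * math.log(1.0 / V)
```

This "effective vocabulary size" reading is why lower perplexity indicates a better-fitting model.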
The topic indicator matrices Z_i for the test set are estimated by fitting a MAP-based estimate while holding the topic parameters θ^{1…k}, Φ^{1…k} fixed.⁴ For a single Multinomial or LPMRF, we set the smoothing parameter β to 10⁻⁴.⁵ We select the LPMRF models using all combinations of 20 log-spaced values of λ between 1 and 10⁻³ and 5 linearly spaced weighting-function constants c between 1 and 2 for the weighting function described in Sec. 2.1. In order to compare our algorithms with LDA, we also provide perplexity results using an LDA Gibbs sampler [13] for MATLAB⁶ to estimate the model parameters. For LDA, we used 2000 iterations and optimized the hyperparameters α and β using the likelihood of a tuning set. We do not compare with many other topic models because many of them use the Multinomial as a base distribution, which could itself be replaced by an LPMRF; instead, we focus on simple representative models.⁷

Results. The perplexity results for all models can be seen in Fig. 3. Clearly, a single LPMRF significantly outperforms a single Multinomial on the test set for both the Classic3 and Wikipedia datasets. The LPMRF model outperforms the simple Multinomial mixtures and topic models in all cases. This suggests that the LPMRF model could be an interesting replacement for the Multinomial in more complex models. For a small number of topics, LPMRF topic models also outperform Gibbs-sampled LDA but do not perform as well for larger numbers of topics. This is likely due to the well-developed sampling methods for learning LDA. Exploring the possibility of incorporating sampling into the fitting of the LPMRF topic model is an excellent area of future work. We believe the LPMRF shows significant promise for replacing the Multinomial in various probabilistic models.
[Figure 3: Test set perplexity. (Left) Single models (Classic3: Multinomial 47, LPMRF 34; Wikipedia: Multinomial 9.3, LPMRF 7.9): the LPMRF quite significantly outperforms the Multinomial for both datasets. (Right) Classic3 mixtures and topic models at k = 3, 10, 20: the LPMRF model outperforms the simple Multinomial model in all cases. For a small number of topics, LPMRF topic models also outperform Gibbs-sampled LDA but do not perform as well for larger numbers of topics.]

Qualitative Analysis of LPMRF Parameters. In addition to the perplexity analysis, we present the top words, top positive dependencies and top negative dependencies for the LPMRF topic model in Table 1. Notice that in LDA only the top words are available for analysis, whereas an LPMRF topic model can produce intuitive dependencies. For example, the positive dependency "language+natural" is composed of two words that often co-occur in the library sciences, but each word independently does not occur very often in comparison to "information" and "library". The positive dependency "stress+reaction" suggests that some of the documents in the Medline dataset likely refer to inducing stress on a subject and measuring the reaction. In the aerospace topic, the positive dependency "non+linear" suggests that non-linear equations are important in aerospace. Notice that these concepts could not be discovered with a standard Multinomial-based topic model.

⁴For topic models, the likelihood computation is intractable if averaging over all possible Z_i. Thus, we use a MAP simplification, primarily for computational reasons, to compare models without computationally expensive likelihood estimation.
⁵For the LPMRF, this merely means adding 10⁻⁴ to the y-values of the nodewise Poisson regressions.
⁶http://psiexp.ss.uci.edu/research/programs_data/toolbox.htm
⁷We could not compare to APM [2, 14] because it is not computationally tractable to calculate the likelihood of a test instance in APM, and thus we cannot compute perplexity.

Table 1: Top Words and Dependencies for the LPMRF Topic Model

Topic 1:
    Top words: information, library, research, system, libraries, book, systems, data, use, scientific
    Top pos. edges: states+united, point+view, test+tests, primary+secondary, recall+precision, dissemination+sdi, direct+access, language+natural, years+five, term+long
    Top neg. edges: paper-book, libraries-retrieval, library-chemical, libraries-language, system-published, information-citations, information-citation, chemical-document, library-scientists, library-scientific

Topic 2:
    Top words: patients, cases, normal, cells, treatment, children, found, results, blood, disease
    Top pos. edges: term+long, positive+negative, cooling+hypothermia, system+central, atmosphere+height, function+functions, methods+suitable, stress+reaction, low+rates, case+report
    Top neg. edges: cells-patient, patients-animals, patients-rats, hormone-protein, growth-parathyroid, patients-lens, patients-mice, patients-dogs, hormone-tumor, patients-child

Topic 3:
    Top words: flow, pressure, boundary, results, theory, method, layer, given, number, presented
    Top pos. edges: supported+simply, account+taken, agreement+good, moment+pitching, non+linear, lower+upper, tunnel+wind, time+dependent, level+noise, purpose+note
    Top neg. edges: flow-shells, number-numbers, flow-shell, wing-hypersonic, solutions-turbulent, mach-reynolds, flow-stresses, theoretical-drag, general-buckling, made-conducted

5 Timing and Scalability

Finally, we explore the practical performance of our algorithms. In C++, we implemented the three core algorithms: fitting p Poisson regressions, fitting the n topic matrices for each document, and drawing 5,000 AIS samples. The timing for each of these components can be seen in Fig. 4 for the Wikipedia dataset.
We set λ = 1 in the first two experiments, which yields roughly 20,000 non-zeros, and varied λ for the third experiment. Each of the components is trivially parallelized using OpenMP (http://openmp.org/). All timing experiments were conducted on the TACC Maverick system with Intel Xeon E5-2680 v2 Ivy Bridge CPUs (2.80 GHz), 20 CPUs per node, and 12.8 GB of memory per CPU (https://www.tacc.utexas.edu/). The scaling is generally linear in the parameters, except for fitting topic matrices, which is O(k²). For the AIS sampling, the scaling is linear in the number of non-zeros in Φ irrespective of p. Overall, we believe our implementations provide both good scaling and practical performance (code available online).

[Figure 4: (Left) The timing for fitting p Poisson regressions shows an empirical scaling of O(np). (Middle) The timing for fitting topic matrices empirically shows scaling that is O(npk²). (Right) The timing for AIS sampling is approximately linear in the number of non-zeros in Φ, irrespective of p.]

6 Conclusion

We motivated the need for a more flexible distribution than the Multinomial, such as the Poisson MRF. However, the PMRF distribution has several complications due to its normalization that hinder it from being a general-purpose model for count data. We overcome these difficulties by restricting the domain to a fixed length, as in a Multinomial, while retaining the parametric form of the Poisson MRF.
By parameterizing by the length of the document, we can then efficiently compute sampling-based estimates of the log partition function and hence the likelihood—which were not tractable to compute under the PMRF model. We extend the LPMRF distribution to both mixtures and topic models by generalizing topic models using fixed-length distributions, and develop parameter estimation methods using dual coordinate descent. We evaluate the perplexity of the proposed LPMRF models on datasets and show that they offer good performance when compared to Multinomial-based models. Finally, we show that our algorithms are fast and have good scaling. New areas could be explored, such as the relation between the topic matrix optimization method and Gibbs sampling; it may be possible to develop sampling-based methods for the LPMRF topic model similar to Gibbs sampling for LDA. In general, we suggest that the LPMRF model could open up new avenues of research where the Multinomial distribution is currently used.

Acknowledgments

This work was supported by NSF (DGE-1110007, IIS-1149803, IIS-1447574, DMS-1264033, CCF-1320746) and ARO (W911NF-12-1-0390).

References

[1] E. Yang, P. Ravikumar, G. I. Allen, and Z. Liu, “Graphical models via generalized linear models,” in NIPS, pp. 1367–1375, 2012.
[2] D. I. Inouye, P. Ravikumar, and I. S. Dhillon, “Admixture of Poisson MRFs: A Topic Model with Word Dependencies,” in International Conference on Machine Learning (ICML), pp. 683–691, 2014.
[3] T. Hofmann, “Probabilistic latent semantic analysis,” in Uncertainty in Artificial Intelligence (UAI), pp. 289–296, Morgan Kaufmann Publishers Inc., 1999.
[4] D. Blei, A. Ng, and M. Jordan, “Latent Dirichlet allocation,” JMLR, vol. 3, pp. 993–1022, 2003.
[5] D. Blei, “Probabilistic topic models,” Communications of the ACM, vol. 55, pp. 77–84, Nov. 2012.
[6] E. Yang, P. Ravikumar, G. Allen, and Z. Liu, “On Poisson graphical models,” in NIPS, pp. 1718–1726, 2013.
[7] J. Chang, J. Boyd-Graber, S.
Gerrish, C. Wang, and D. Blei, “Reading tea leaves: How humans interpret topic models,” in NIPS, 2009.
[8] D. Mimno, H. M. Wallach, E. Talley, M. Leenders, and A. McCallum, “Optimizing semantic coherence in topic models,” in EMNLP, pp. 262–272, 2011.
[9] D. Newman, Y. Noh, E. Talley, S. Karimi, and T. Baldwin, “Evaluating topic models for digital libraries,” in ACM/IEEE Joint Conference on Digital Libraries (JCDL), pp. 215–224, 2010.
[10] N. Aletras and R. Court, “Evaluating Topic Coherence Using Distributional Semantics,” in International Conference on Computational Semantics (IWCS 2013) – Long Papers, pp. 13–22, 2013.
[11] D. Mimno and D. Blei, “Bayesian Checking for Topic Models,” in EMNLP, pp. 227–237, 2011.
[12] R. Nallapati, A. Ahmed, W. Cohen, and E. Xing, “Sparse word graphs: A scalable algorithm for capturing word correlations in topic models,” in ICDM, pp. 343–348, 2007.
[13] M. Steyvers and T. Griffiths, “Probabilistic topic models,” in Latent Semantic Analysis: A Road to Meaning, pp. 424–440, 2007.
[14] D. I. Inouye, P. K. Ravikumar, and I. S. Dhillon, “Capturing Semantically Meaningful Word Dependencies with an Admixture of Poisson MRFs,” in NIPS, pp. 3158–3166, 2014.
[15] P. M. E. Altham, “Two Generalizations of the Binomial Distribution,” Journal of the Royal Statistical Society, Series C (Applied Statistics), vol. 27, no. 2, pp. 162–167, 1978.
[16] R. M. Neal, “Annealed importance sampling,” Statistics and Computing, vol. 11, no. 2, pp. 125–139, 2001.
Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks

Shaoqing Ren∗ Kaiming He Ross Girshick Jian Sun
Microsoft Research
{v-shren, kahe, rbg, jiansun}@microsoft.com

Abstract

State-of-the-art object detection networks depend on region proposal algorithms to hypothesize object locations. Advances like SPPnet [7] and Fast R-CNN [5] have reduced the running time of these detection networks, exposing region proposal computation as a bottleneck. In this work, we introduce a Region Proposal Network (RPN) that shares full-image convolutional features with the detection network, thus enabling nearly cost-free region proposals. An RPN is a fully-convolutional network that simultaneously predicts object bounds and objectness scores at each position. RPNs are trained end-to-end to generate high-quality region proposals, which are used by Fast R-CNN for detection. With a simple alternating optimization, RPN and Fast R-CNN can be trained to share convolutional features. For the very deep VGG-16 model [19], our detection system has a frame rate of 5 fps (including all steps) on a GPU, while achieving state-of-the-art object detection accuracy on PASCAL VOC 2007 (73.2% mAP) and 2012 (70.4% mAP) using 300 proposals per image. Code is available at https://github.com/ShaoqingRen/faster_rcnn.

1 Introduction

Recent advances in object detection are driven by the success of region proposal methods (e.g., [22]) and region-based convolutional neural networks (R-CNNs) [6]. Although region-based CNNs were computationally expensive as originally developed in [6], their cost has been drastically reduced thanks to sharing convolutions across proposals [7, 5]. The latest incarnation, Fast R-CNN [5], achieves near real-time rates using very deep networks [19], when ignoring the time spent on region proposals. Now, proposals are the computational bottleneck in state-of-the-art detection systems.
Region proposal methods typically rely on inexpensive features and economical inference schemes. Selective Search (SS) [22], one of the most popular methods, greedily merges superpixels based on engineered low-level features. Yet when compared to efficient detection networks [5], Selective Search is an order of magnitude slower, at 2s per image in a CPU implementation. EdgeBoxes [24] currently provides the best tradeoff between proposal quality and speed, at 0.2s per image. Nevertheless, the region proposal step still consumes as much running time as the detection network. One may note that fast region-based CNNs take advantage of GPUs, while the region proposal methods used in research are implemented on the CPU, making such runtime comparisons inequitable. An obvious way to accelerate proposal computation is to re-implement it for the GPU. This may be an effective engineering solution, but re-implementation ignores the down-stream detection network and therefore misses important opportunities for sharing computation. In this paper, we show that an algorithmic change—computing proposals with a deep net—leads to an elegant and effective solution, where proposal computation is nearly cost-free given the detection network’s computation. To this end, we introduce novel Region Proposal Networks (RPNs) that share convolutional layers with state-of-the-art object detection networks [7, 5]. By sharing convolutions at test-time, the marginal cost for computing proposals is small (e.g., 10ms per image). Our observation is that the convolutional (conv) feature maps used by region-based detectors, like Fast R-CNN, can also be used for generating region proposals.

∗ Shaoqing Ren is with the University of Science and Technology of China. This work was done when he was an intern at Microsoft Research.
On top of these conv features, we construct RPNs by adding two additional conv layers: one that encodes each conv map position into a short (e.g., 256-d) feature vector and a second that, at each conv map position, outputs an objectness score and regressed bounds for k region proposals relative to various scales and aspect ratios at that location (k = 9 is a typical value). Our RPNs are thus a kind of fully-convolutional network (FCN) [14] and they can be trained end-to-end specifically for the task of generating detection proposals. To unify RPNs with Fast R-CNN [5] object detection networks, we propose a simple training scheme that alternates between fine-tuning for the region proposal task and then fine-tuning for object detection, while keeping the proposals fixed. This scheme converges quickly and produces a unified network with conv features that are shared between both tasks. We evaluate our method on the PASCAL VOC detection benchmarks [4], where RPNs with Fast R-CNNs produce detection accuracy better than the strong baseline of Selective Search with Fast R-CNNs. Meanwhile, our method waives nearly all computational burdens of SS at test-time—the effective running time for proposals is just 10 milliseconds. Using the expensive very deep models of [19], our detection method still has a frame rate of 5 fps (including all steps) on a GPU, and thus is a practical object detection system in terms of both speed and accuracy (73.2% mAP on PASCAL VOC 2007 and 70.4% mAP on 2012). Code is available at https://github.com/ShaoqingRen/faster_rcnn.

2 Related Work

Several recent papers have proposed ways of using deep networks for locating class-specific or class-agnostic bounding boxes [21, 18, 3, 20]. In the OverFeat method [18], a fully-connected (fc) layer is trained to predict the box coordinates for the localization task that assumes a single object. The fc layer is then turned into a conv layer for detecting multiple class-specific objects.
The MultiBox methods [3, 20] generate region proposals from a network whose last fc layer simultaneously predicts multiple (e.g., 800) boxes, which are used for R-CNN [6] object detection. Their proposal network is applied on a single image or multiple large image crops (e.g., 224×224) [20]. We discuss OverFeat and MultiBox in more depth later in context with our method. Shared computation of convolutions [18, 7, 2, 5] has been attracting increasing attention for efficient, yet accurate, visual recognition. The OverFeat paper [18] computes conv features from an image pyramid for classification, localization, and detection. Adaptively-sized pooling (SPP) [7] on shared conv feature maps is proposed for efficient region-based object detection [7, 16] and semantic segmentation [2]. Fast R-CNN [5] enables end-to-end detector training on shared conv features and shows compelling accuracy and speed.

3 Region Proposal Networks

A Region Proposal Network (RPN) takes an image (of any size) as input and outputs a set of rectangular object proposals, each with an objectness score.¹ We model this process with a fully-convolutional network [14], which we describe in this section. Because our ultimate goal is to share computation with a Fast R-CNN object detection network [5], we assume that both nets share a common set of conv layers. In our experiments, we investigate the Zeiler and Fergus model [23] (ZF), which has 5 shareable conv layers, and the Simonyan and Zisserman model [19] (VGG), which has 13 shareable conv layers. To generate region proposals, we slide a small network over the conv feature map output by the last shared conv layer. This network is fully connected to an n × n spatial window of the input conv

¹ “Region” is a generic term and in this paper we only consider rectangular regions, as is common for many methods (e.g., [20, 22, 24]). “Objectness” measures membership to a set of object classes vs. background.
feature map.

Figure 1: Left: Region Proposal Network (RPN). Right: Example detections using RPN proposals on PASCAL VOC 2007 test. Our method detects objects in a wide range of scales and aspect ratios.

Each sliding window is mapped to a lower-dimensional vector (256-d for ZF and 512-d for VGG). This vector is fed into two sibling fully-connected layers—a box-regression layer (reg) and a box-classification layer (cls). We use n = 3 in this paper, noting that the effective receptive field on the input image is large (171 and 228 pixels for ZF and VGG, respectively). This mini-network is illustrated at a single position in Fig. 1 (left). Note that because the mini-network operates in a sliding-window fashion, the fully-connected layers are shared across all spatial locations. This architecture is naturally implemented with an n × n conv layer followed by two sibling 1 × 1 conv layers (for reg and cls, respectively). ReLUs [15] are applied to the output of the n × n conv layer.

Translation-Invariant Anchors

At each sliding-window location, we simultaneously predict k region proposals, so the reg layer has 4k outputs encoding the coordinates of k boxes. The cls layer outputs 2k scores that estimate the probability of object / not-object for each proposal.² The k proposals are parameterized relative to k reference boxes, called anchors. Each anchor is centered at the sliding window in question, and is associated with a scale and aspect ratio. We use 3 scales and 3 aspect ratios, yielding k = 9 anchors at each sliding position. For a conv feature map of size W × H (typically ∼2,400 positions), there are WHk anchors in total.
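The anchor construction above can be sketched in a few lines of Python. This is a hypothetical illustration rather than the released code (the helper name `make_anchors` is ours): for a scale s and aspect ratio r = w/h, choosing w = s·√r and h = s/√r varies the shape while preserving the area s².

```python
import math

def make_anchors(scales=(128, 256, 512), ratios=(0.5, 1.0, 2.0)):
    """Return the k = len(scales) * len(ratios) anchor shapes (w, h).

    For a scale s and aspect ratio r = w/h, setting w = s * sqrt(r) and
    h = s / sqrt(r) keeps the anchor area at s**2 while varying its shape.
    """
    anchors = []
    for s in scales:
        for r in ratios:
            anchors.append((s * math.sqrt(r), s / math.sqrt(r)))
    return anchors

anchors = make_anchors()  # 3 scales x 3 aspect ratios = 9 anchor shapes
```

With these defaults, every one of the WH sliding positions reuses the same 9 shapes, giving the WHk anchors mentioned above.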
An important property of our approach is that it is translation invariant, both in terms of the anchors and the functions that compute proposals relative to the anchors. As a comparison, the MultiBox method [20] uses k-means to generate 800 anchors, which are not translation invariant. If one translates an object in an image, the proposal should translate and the same function should be able to predict the proposal in either location. Moreover, because the MultiBox anchors are not translation invariant, it requires a (4+1)×800-dimensional output layer, whereas our method requires a (4+2)×9-dimensional output layer. Our proposal layers have an order of magnitude fewer parameters (27 million for MultiBox using GoogLeNet [20] vs. 2.4 million for RPN using VGG-16), and thus have less risk of overfitting on small datasets, like PASCAL VOC.

A Loss Function for Learning Region Proposals

For training RPNs, we assign a binary class label (of being an object or not) to each anchor. We assign a positive label to two kinds of anchors: (i) the anchor/anchors with the highest Intersection-over-Union (IoU) overlap with a ground-truth box, or (ii) an anchor that has an IoU overlap higher than 0.7 with any ground-truth box. Note that a single ground-truth box may assign positive labels to multiple anchors. We assign a negative label to a non-positive anchor if its IoU ratio is lower than 0.3 for all ground-truth boxes. Anchors that are neither positive nor negative do not contribute to the training objective. With these definitions, we minimize an objective function following the multi-task loss in Fast R-CNN [5]. Our loss function for an image is defined as:

L(\{p_i\}, \{t_i\}) = \frac{1}{N_{cls}} \sum_i L_{cls}(p_i, p_i^*) + \lambda \frac{1}{N_{reg}} \sum_i p_i^* L_{reg}(t_i, t_i^*).   (1)

² For simplicity we implement the cls layer as a two-class softmax layer. Alternatively, one may use logistic regression to produce k scores.
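The anchor labeling rule above (positive if IoU is above 0.7 with some ground-truth box or if the anchor has the highest IoU for a box; negative if IoU is below 0.3 for all boxes; ignored otherwise) can be sketched in plain Python. The helper names are ours, not from the released code; boxes are corner-form (x1, y1, x2, y2).

```python
def iou(a, b):
    """Intersection-over-Union of two boxes given as (x1, y1, x2, y2)."""
    iw = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    ih = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = iw * ih
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def label_anchors(anchors, gt_boxes, pos_thresh=0.7, neg_thresh=0.3):
    """Assign +1 (positive), -1 (negative), or 0 (ignored) to each anchor."""
    best = [max(iou(a, g) for g in gt_boxes) for a in anchors]
    labels = [1 if b > pos_thresh else (-1 if b < neg_thresh else 0) for b in best]
    # Rule (i): the highest-IoU anchor(s) for each ground-truth box are positive,
    # so every ground-truth box gets at least one positive anchor.
    for g in gt_boxes:
        overlaps = [iou(a, g) for a in anchors]
        top = max(overlaps)
        if top > 0:
            for i, o in enumerate(overlaps):
                if o == top:
                    labels[i] = 1
    return labels
```

The second pass implements rule (i) and is what allows an anchor with IoU between 0.3 and 0.7 to still become positive when it is the best match for some ground-truth box.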
Here, i is the index of an anchor in a mini-batch and p_i is the predicted probability of anchor i being an object. The ground-truth label p_i^* is 1 if the anchor is positive, and is 0 if the anchor is negative. t_i is a vector representing the 4 parameterized coordinates of the predicted bounding box, and t_i^* is that of the ground-truth box associated with a positive anchor. The classification loss L_cls is log loss over two classes (object vs. not object). For the regression loss, we use L_reg(t_i, t_i^*) = R(t_i − t_i^*) where R is the robust loss function (smooth L1) defined in [5]. The term p_i^* L_reg means the regression loss is activated only for positive anchors (p_i^* = 1) and is disabled otherwise (p_i^* = 0). The outputs of the cls and reg layers consist of {p_i} and {t_i} respectively. The two terms are normalized with N_cls and N_reg, and a balancing weight λ.³ For regression, we adopt the parameterizations of the 4 coordinates following [6]:

t_x = (x - x_a)/w_a,  t_y = (y - y_a)/h_a,  t_w = \log(w/w_a),  t_h = \log(h/h_a),
t_x^* = (x^* - x_a)/w_a,  t_y^* = (y^* - y_a)/h_a,  t_w^* = \log(w^*/w_a),  t_h^* = \log(h^*/h_a),

where x, y, w, and h denote the two coordinates of the box center, its width, and its height. Variables x, x_a, and x^* are for the predicted box, anchor box, and ground-truth box respectively (likewise for y, w, h). This can be thought of as bounding-box regression from an anchor box to a nearby ground-truth box. Nevertheless, our method achieves bounding-box regression in a different manner from previous feature-map-based methods [7, 5]. In [7, 5], bounding-box regression is performed on features pooled from arbitrarily sized regions, and the regression weights are shared by all region sizes. In our formulation, the features used for regression are of the same spatial size (n × n) on the feature maps. To account for varying sizes, a set of k bounding-box regressors are learned. Each regressor is responsible for one scale and one aspect ratio, and the k regressors do not share weights.
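The parameterization above is an invertible change of coordinates, which a minimal sketch makes easy to check. Boxes here are center-form (x, y, w, h), and the helper names `encode`/`decode` are ours, not from the released code.

```python
import math

def encode(box, anchor):
    """Map a box to regression targets (tx, ty, tw, th) relative to an anchor.

    Both box and anchor are center-form (x, y, w, h): offsets are normalized
    by the anchor size, and width/height are encoded as log ratios.
    """
    x, y, w, h = box
    xa, ya, wa, ha = anchor
    return ((x - xa) / wa, (y - ya) / ha, math.log(w / wa), math.log(h / ha))

def decode(t, anchor):
    """Invert encode(): recover the box implied by the regression output."""
    tx, ty, tw, th = t
    xa, ya, wa, ha = anchor
    return (xa + tx * wa, ya + ty * ha, wa * math.exp(tw), ha * math.exp(th))
```

Training regresses the network output toward encode(gt_box, anchor) for each positive anchor; at test-time, decode() turns the predicted targets back into a proposal box.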
As such, it is still possible to predict boxes of various sizes even though the features are of a fixed size/scale.

Optimization

The RPN, which is naturally implemented as a fully-convolutional network [14], can be trained end-to-end by back-propagation and stochastic gradient descent (SGD) [12]. We follow the “image-centric” sampling strategy from [5] to train this network. Each mini-batch arises from a single image that contains many positive and negative anchors. It is possible to optimize for the loss functions of all anchors, but this will bias towards negative samples as they dominate. Instead, we randomly sample 256 anchors in an image to compute the loss function of a mini-batch, where the sampled positive and negative anchors have a ratio of up to 1:1. If there are fewer than 128 positive samples in an image, we pad the mini-batch with negative ones. We randomly initialize all new layers by drawing weights from a zero-mean Gaussian distribution with standard deviation 0.01. All other layers (i.e., the shared conv layers) are initialized by pre-training a model for ImageNet classification [17], as is standard practice [6]. We tune all layers of the ZF net, and conv3_1 and up for the VGG net to conserve memory [5]. We use a learning rate of 0.001 for 60k mini-batches, and 0.0001 for the next 20k mini-batches on the PASCAL dataset. We also use a momentum of 0.9 and a weight decay of 0.0005 [11]. Our implementation uses Caffe [10].

Sharing Convolutional Features for Region Proposal and Object Detection

Thus far we have described how to train a network for region proposal generation, without considering the region-based object detection CNN that will utilize these proposals. For the detection network, we adopt Fast R-CNN [5]⁴ and now describe an algorithm that learns conv layers that are shared between the RPN and Fast R-CNN. Both RPN and Fast R-CNN, trained independently, will modify their conv layers in different ways.
We therefore need to develop a technique that allows for sharing conv layers between the two networks, rather than learning two separate networks. Note that this is not as easy as simply defining a single network that includes both RPN and Fast R-CNN, and then optimizing it jointly with back-propagation. The reason is that Fast R-CNN training depends on fixed object proposals and it is not clear a priori if learning Fast R-CNN while simultaneously changing the proposal mechanism will converge. While this joint optimization is an interesting question for future work, we develop a pragmatic 4-step training algorithm to learn shared features via alternating optimization. In the first step, we train the RPN as described above. This network is initialized with an ImageNet-pre-trained model and fine-tuned end-to-end for the region proposal task. In the second step, we train a separate detection network by Fast R-CNN using the proposals generated by the step-1 RPN. This detection network is also initialized by the ImageNet-pre-trained model. At this point the two networks do not share conv layers. In the third step, we use the detector network to initialize RPN training, but we fix the shared conv layers and only fine-tune the layers unique to RPN. Now the two networks share conv layers. Finally, keeping the shared conv layers fixed, we fine-tune the fc layers of the Fast R-CNN. As such, both networks share the same conv layers and form a unified network.

³ In our early implementation (as also in the released code), λ was set as 10, and the cls term in Eqn. (1) was normalized by the mini-batch size (i.e., N_cls = 256) and the reg term was normalized by the number of anchor locations (i.e., N_reg ∼ 2,400). Both cls and reg terms are roughly equally weighted in this way.
⁴ https://github.com/rbgirshick/fast-rcnn

Implementation Details

We train and test both region proposal and object detection networks on single-scale images [7, 5].
We re-scale the images such that their shorter side is s = 600 pixels [5]. Multi-scale feature extraction may improve accuracy but does not exhibit a good speed-accuracy trade-off [5]. We also note that for ZF and VGG nets, the total stride on the last conv layer is 16 pixels on the re-scaled image, and thus is ∼10 pixels on a typical PASCAL image (∼500×375). Even such a large stride provides good results, though accuracy may be further improved with a smaller stride. For anchors, we use 3 scales with box areas of 128², 256², and 512² pixels, and 3 aspect ratios of 1:1, 1:2, and 2:1. We note that our algorithm allows the use of anchor boxes that are larger than the underlying receptive field when predicting large proposals. Such predictions are not impossible—one may still roughly infer the extent of an object if only the middle of the object is visible. With this design, our solution does not need multi-scale features or multi-scale sliding windows to predict large regions, saving considerable running time. Fig. 1 (right) shows the capability of our method for a wide range of scales and aspect ratios. The table below shows the learned average proposal size for each anchor using the ZF net (numbers for s = 600).

anchor    128², 2:1  128², 1:1  128², 1:2  256², 2:1  256², 1:1  256², 1:2  512², 2:1  512², 1:1  512², 1:2
proposal  188×111    113×114    70×92      416×229    261×284    174×332    768×437    499×501    355×715

The anchor boxes that cross image boundaries need to be handled with care. During training, we ignore all cross-boundary anchors so they do not contribute to the loss. For a typical 1000 × 600 image, there will be roughly 20k (≈ 60 × 40 × 9) anchors in total. With the cross-boundary anchors ignored, there are about 6k anchors per image for training. If the boundary-crossing outliers are not ignored in training, they introduce large, difficult-to-correct error terms in the objective, and training does not converge.
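The anchor counts above can be reproduced with a rough sketch. The helper is hypothetical (stride 16, anchor centers offset to cell centers; the exact grid origin in the released code may differ, so the inside-anchor count is only approximate):

```python
import math

def grid_anchors(img_w, img_h, stride=16,
                 scales=(128, 256, 512), ratios=(0.5, 1.0, 2.0)):
    """Tile the k anchor shapes over a stride-spaced grid; also collect the
    anchors lying fully inside the image (the ones kept for training)."""
    shapes = [(s * math.sqrt(r), s / math.sqrt(r)) for s in scales for r in ratios]
    all_anchors, inside = [], []
    for cy in range(stride // 2, img_h, stride):
        for cx in range(stride // 2, img_w, stride):
            for w, h in shapes:
                box = (cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2)
                all_anchors.append(box)
                # Cross-boundary anchors are ignored during training.
                if box[0] >= 0 and box[1] >= 0 and box[2] <= img_w and box[3] <= img_h:
                    inside.append(box)
    return all_anchors, inside

all_anchors, inside = grid_anchors(1000, 600)  # roughly 20k anchors in total
```

For a 1000 × 600 image this grid has 62 × 37 positions with 9 anchors each, matching the ≈ 60 × 40 × 9 ≈ 20k estimate in the text; filtering to fully-inside anchors removes the large boundary-crossing ones.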
During testing, however, we still apply the fully-convolutional RPN to the entire image. This may generate cross-boundary proposal boxes, which we clip to the image boundary. Some RPN proposals highly overlap with each other. To reduce redundancy, we adopt non-maximum suppression (NMS) on the proposal regions based on their cls scores. We fix the IoU threshold for NMS at 0.7, which leaves us about 2k proposal regions per image. As we will show, NMS does not harm the ultimate detection accuracy, but substantially reduces the number of proposals. After NMS, we use the top-N ranked proposal regions for detection. In the following, we train Fast R-CNN using 2k RPN proposals, but evaluate different numbers of proposals at test-time.

4 Experiments

We comprehensively evaluate our method on the PASCAL VOC 2007 detection benchmark [4]. This dataset consists of about 5k trainval images and 5k test images over 20 object categories. We also provide results in the PASCAL VOC 2012 benchmark for a few models. For the ImageNet pre-trained network, we use the “fast” version of ZF net [23] that has 5 conv layers and 3 fc layers, and the public VGG-16 model⁵ [19] that has 13 conv layers and 3 fc layers. We primarily evaluate detection mean Average Precision (mAP), because this is the actual metric for object detection (rather than focusing on object proposal proxy metrics). Table 1 (top) shows Fast R-CNN results when trained and tested using various region proposal methods. These results use the ZF net. For Selective Search (SS) [22], we generate about 2k SS

⁵ www.robots.ox.ac.uk/~vgg/research/very_deep/

Table 1: Detection results on PASCAL VOC 2007 test set (trained on VOC 2007 trainval). The detectors are Fast R-CNN with ZF, but using various proposal methods for training and testing.
train-time proposals (method, # boxes) | test-time proposals (method, # proposals) | mAP (%)
SS, 2k | SS, 2k | 58.7
EB, 2k | EB, 2k | 58.6
RPN+ZF shared, 2k | RPN+ZF shared, 300 | 59.9
(ablation experiments follow below)
RPN+ZF unshared, 2k | RPN+ZF unshared, 300 | 58.7
SS, 2k | RPN+ZF, 100 | 55.1
SS, 2k | RPN+ZF, 300 | 56.8
SS, 2k | RPN+ZF, 1k | 56.3
SS, 2k | RPN+ZF (no NMS), 6k | 55.2
SS, 2k | RPN+ZF (no cls), 100 | 44.6
SS, 2k | RPN+ZF (no cls), 300 | 51.4
SS, 2k | RPN+ZF (no cls), 1k | 55.8
SS, 2k | RPN+ZF (no reg), 300 | 52.1
SS, 2k | RPN+ZF (no reg), 1k | 51.3
SS, 2k | RPN+VGG, 300 | 59.2

proposals by the “fast” mode. For EdgeBoxes (EB) [24], we generate the proposals by the default EB setting tuned for 0.7 IoU. SS has an mAP of 58.7% and EB has an mAP of 58.6%. RPN with Fast R-CNN achieves competitive results, with an mAP of 59.9% while using up to 300 proposals.⁶ Using RPN yields a much faster detection system than using either SS or EB because of shared conv computations; the fewer proposals also reduce the region-wise fc cost. Next, we consider several ablations of RPN and then show that proposal quality improves when using the very deep network.

Ablation Experiments. To investigate the behavior of RPNs as a proposal method, we conducted several ablation studies. First, we show the effect of sharing conv layers between the RPN and Fast R-CNN detection network. To do this, we stop after the second step in the 4-step training process. Using separate networks reduces the result slightly to 58.7% (RPN+ZF, unshared, Table 1). We observe that this is because in the third step, when the detector-tuned features are used to fine-tune the RPN, the proposal quality is improved. Next, we disentangle the RPN’s influence on training the Fast R-CNN detection network. For this purpose, we train a Fast R-CNN model by using the 2k SS proposals and ZF net. We fix this detector and evaluate the detection mAP by changing the proposal regions used at test-time. In these ablation experiments, the RPN does not share features with the detector.
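The non-maximum suppression step applied to RPN proposals (IoU threshold 0.7 on cls-scored boxes, described earlier) reduces to a short greedy procedure. This sketch uses our own helper name and corner-form (x1, y1, x2, y2) boxes:

```python
def nms(boxes, scores, iou_thresh=0.7):
    """Greedy non-maximum suppression: visit boxes in descending score order
    and keep a box only if its IoU with every already-kept box is <= iou_thresh.
    Returns the indices of the kept boxes."""
    def iou(a, b):
        iw = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
        ih = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
        inter = iw * ih
        union = ((a[2] - a[0]) * (a[3] - a[1])
                 + (b[2] - b[0]) * (b[3] - b[1]) - inter)
        return inter / union if union > 0 else 0.0

    order = sorted(range(len(boxes)), key=lambda i: -scores[i])
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) <= iou_thresh for j in keep):
            keep.append(i)
    return keep
```

Sorting by the cls score first is what lets the highest-confidence proposal in each cluster of overlapping boxes survive while its near-duplicates are suppressed.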
Replacing SS with 300 RPN proposals at test-time leads to an mAP of 56.8%. The loss in mAP is because of the inconsistency between the training/testing proposals. This result serves as the baseline for the following comparisons. Somewhat surprisingly, the RPN still leads to a competitive result (55.1%) when using the top-ranked 100 proposals at test-time, indicating that the top-ranked RPN proposals are accurate. On the other extreme, using the top-ranked 6k RPN proposals (without NMS) has a comparable mAP (55.2%), suggesting NMS does not harm the detection mAP and may reduce false alarms. Next, we separately investigate the roles of RPN’s cls and reg outputs by turning off either of them at test-time. When the cls layer is removed at test-time (thus no NMS/ranking is used), we randomly sample N proposals from the unscored regions. The mAP is nearly unchanged with N = 1k (55.8%), but degrades considerably to 44.6% when N = 100. This shows that the cls scores account for the accuracy of the highest ranked proposals. On the other hand, when the reg layer is removed at test-time (so the proposals become anchor boxes), the mAP drops to 52.1%. This suggests that the high-quality proposals are mainly due to regressed positions. The anchor boxes alone are not sufficient for accurate detection.

⁶ For RPN, the number of proposals (e.g., 300) is the maximum number for an image. RPN may produce fewer proposals after NMS, and thus the average number of proposals is smaller.

Table 2: Detection results on PASCAL VOC 2007 test set. The detector is Fast R-CNN and VGG-16. Training data: “07”: VOC 2007 trainval, “07+12”: union set of VOC 2007 trainval and VOC 2012 trainval. For RPN, the train-time proposals for Fast R-CNN are 2k. †: this was reported in [5]; using the repository provided by this paper, this number is higher (68.0±0.3 in six runs).
method | # proposals | data | mAP (%) | time (ms)
SS | 2k | 07 | 66.9† | 1830
SS | 2k | 07+12 | 70.0 | 1830
RPN+VGG, unshared | 300 | 07 | 68.5 | 342
RPN+VGG, shared | 300 | 07 | 69.9 | 198
RPN+VGG, shared | 300 | 07+12 | 73.2 | 198

Table 3: Detection results on PASCAL VOC 2012 test set. The detector is Fast R-CNN and VGG-16. Training data: “07”: VOC 2007 trainval, “07++12”: union set of VOC 2007 trainval+test and VOC 2012 trainval. For RPN, the train-time proposals for Fast R-CNN are 2k. †: http://host.robots.ox.ac.uk:8080/anonymous/HZJTQA.html. ‡: http://host.robots.ox.ac.uk:8080/anonymous/YNPLXB.html

method | # proposals | data | mAP (%)
SS | 2k | 12 | 65.7
SS | 2k | 07++12 | 68.4
RPN+VGG, shared† | 300 | 12 | 67.0
RPN+VGG, shared‡ | 300 | 07++12 | 70.4

Table 4: Timing (ms) on a K40 GPU, except SS proposal is evaluated in a CPU. “Region-wise” includes NMS, pooling, fc, and softmax. See our released code for the profiling of running time.

model | system | conv | proposal | region-wise | total | rate
VGG | SS + Fast R-CNN | 146 | 1510 | 174 | 1830 | 0.5 fps
VGG | RPN + Fast R-CNN | 141 | 10 | 47 | 198 | 5 fps
ZF | RPN + Fast R-CNN | 31 | 3 | 25 | 59 | 17 fps

We also evaluate the effects of more powerful networks on the proposal quality of RPN alone. We use VGG-16 to train the RPN, and still use the above detector of SS+ZF. The mAP improves from 56.8% (using RPN+ZF) to 59.2% (using RPN+VGG). This is a promising result, because it suggests that the proposal quality of RPN+VGG is better than that of RPN+ZF. Because proposals of RPN+ZF are competitive with SS (both are 58.7% when consistently used for training and testing), we may expect RPN+VGG to be better than SS. The following experiments justify this hypothesis.

Detection Accuracy and Running Time of VGG-16. Table 2 shows the results of VGG-16 for both proposal and detection. Using RPN+VGG, the Fast R-CNN result is 68.5% for unshared features, slightly higher than the SS baseline. As shown above, this is because the proposals generated by RPN+VGG are more accurate than SS.
Unlike SS, which is pre-defined, the RPN is actively trained and benefits from better networks. For the feature-shared variant, the result is 69.9%—better than the strong SS baseline, yet with nearly cost-free proposals. We further train the RPN and detection network on the union set of PASCAL VOC 2007 trainval and 2012 trainval, following [5]. The mAP is 73.2%. On the PASCAL VOC 2012 test set (Table 3), our method has an mAP of 70.4% trained on the union set of VOC 2007 trainval+test and VOC 2012 trainval, following [5]. In Table 4 we summarize the running time of the entire object detection system. SS takes 1-2 seconds depending on content (on average 1.51s), and Fast R-CNN with VGG-16 takes 320ms on 2k SS proposals (or 223ms if using SVD on fc layers [5]). Our system with VGG-16 takes in total 198ms for both proposal and detection. With the conv features shared, the RPN alone only takes 10ms computing the additional layers. Our region-wise computation is also low, thanks to fewer proposals (300). Our system has a frame-rate of 17 fps with the ZF net.

Analysis of Recall-to-IoU. Next we compute the recall of proposals at different IoU ratios with ground-truth boxes. It is noteworthy that the Recall-to-IoU metric is just loosely [9, 8, 1] related to the ultimate detection accuracy. It is more appropriate to use this metric to diagnose the proposal method than to evaluate it.

Figure 2: Recall vs. IoU overlap ratio on the PASCAL VOC 2007 test set.

Table 5: One-Stage Detection vs. Two-Stage Proposal + Detection. Detection results are on the PASCAL VOC 2007 test set using the ZF model and Fast R-CNN. RPN uses unshared features.

system | regions | # regions | detector | mAP (%)
Two-Stage | RPN + ZF, unshared | 300 | Fast R-CNN + ZF, 1 scale | 58.7
One-Stage | dense, 3 scales, 3 asp. ratios | 20k | Fast R-CNN + ZF, 1 scale | 53.8
One-Stage | dense, 3 scales, 3 asp. ratios | 20k | Fast R-CNN + ZF, 5 scales | 53.9

In Fig. 2, we show the results of using 300, 1k, and 2k proposals.
We compare with SS and EB, and the N proposals are the top-N ranked ones based on the confidence generated by these methods. The plots show that the RPN method behaves gracefully when the number of proposals drops from 2k to 300. This explains why the RPN has a good ultimate detection mAP when using as few as 300 proposals. As we analyzed before, this property is mainly attributed to the cls term of the RPN. The recall of SS and EB drops more quickly than RPN when the proposals are fewer. One-Stage Detection vs. Two-Stage Proposal + Detection. The OverFeat paper [18] proposes a detection method that uses regressors and classifiers on sliding windows over conv feature maps. OverFeat is a one-stage, class-specific detection pipeline, and ours is a two-stage cascade consisting of class-agnostic proposals and class-specific detections. In OverFeat, the region-wise features come from a sliding window of one aspect ratio over a scale pyramid. These features are used to simultaneously determine the location and category of objects. In RPN, the features are from square (3×3) sliding windows and predict proposals relative to anchors with different scales and aspect ratios. Though both methods use sliding windows, the region proposal task is only the first stage of RPN + Fast R-CNN—the detector attends to the proposals to refine them. In the second stage of our cascade, the region-wise features are adaptively pooled [7, 5] from proposal boxes that more faithfully cover the features of the regions. We believe these features lead to more accurate detections. To compare the one-stage and two-stage systems, we emulate the OverFeat system (and thus also circumvent other differences of implementation details) by one-stage Fast R-CNN. In this system, the “proposals” are dense sliding windows of 3 scales (128, 256, 512) and 3 aspect ratios (1:1, 1:2, 2:1). Fast R-CNN is trained to predict class-specific scores and regress box locations from these sliding windows. 
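The dense sliding-window "proposals" used to emulate the one-stage system can be enumerated directly. A sketch under our own naming, assuming windows centred on a regular grid at the 3 scales and 3 aspect ratios quoted above (the grid stride of 16 is illustrative, not taken from the paper):

```python
import numpy as np

def dense_windows(img_w, img_h, stride=16,
                  scales=(128, 256, 512), ratios=(1.0, 0.5, 2.0)):
    """Enumerate dense sliding windows (x1, y1, x2, y2) on a regular grid,
    at 3 scales x 3 aspect ratios (ratio = height/width)."""
    shapes = []
    for s in scales:
        for r in ratios:
            w = s / np.sqrt(r)   # keep area ~ s*s while h/w = r
            h = s * np.sqrt(r)
            shapes.append((w, h))
    cx, cy = np.meshgrid(np.arange(stride // 2, img_w, stride),
                         np.arange(stride // 2, img_h, stride))
    centres = np.stack([cx.ravel(), cy.ravel()], axis=1).astype(float)
    boxes = []
    for w, h in shapes:
        boxes.append(np.concatenate([centres - [w / 2, h / 2],
                                     centres + [w / 2, h / 2]], axis=1))
    return np.concatenate(boxes, axis=0)
```

On a 640x480 image this grid yields 40 x 30 x 9 = 10,800 windows; over a dataset this quickly reaches the ~20k-per-image regime of the one-stage baseline, which is why it is so much slower than 300 RPN proposals.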
Because the OverFeat system uses an image pyramid, we also evaluate using conv features extracted from 5 scales. We use those 5 scales as in [7, 5]. Table 5 compares the two-stage system and two variants of the one-stage system. Using the ZF model, the one-stage system has an mAP of 53.9%. This is lower than the two-stage system (58.7%) by 4.8%. This experiment justifies the effectiveness of cascaded region proposals and object detection. Similar observations are reported in [5, 13], where replacing SS region proposals with sliding windows leads to ~6% degradation in both papers. We also note that the one-stage system is slower as it has considerably more proposals to process.

5 Conclusion

We have presented Region Proposal Networks (RPNs) for efficient and accurate region proposal generation. By sharing convolutional features with the down-stream detection network, the region proposal step is nearly cost-free. Our method enables a unified, deep-learning-based object detection system to run at 5-17 fps. The learned RPN also improves region proposal quality and thus the overall object detection accuracy.

References

[1] N. Chavali, H. Agrawal, A. Mahendru, and D. Batra. Object-Proposal Evaluation Protocol is 'Gameable'. arXiv:1505.05836, 2015.
[2] J. Dai, K. He, and J. Sun. Convolutional feature masking for joint object and stuff segmentation. In CVPR, 2015.
[3] D. Erhan, C. Szegedy, A. Toshev, and D. Anguelov. Scalable object detection using deep neural networks. In CVPR, 2014.
[4] M. Everingham, L. Van Gool, C. K. I. Williams, J. Winn, and A. Zisserman. The PASCAL Visual Object Classes Challenge 2007 (VOC2007) Results, 2007.
[5] R. Girshick. Fast R-CNN. arXiv:1504.08083, 2015.
[6] R. Girshick, J. Donahue, T. Darrell, and J. Malik. Rich feature hierarchies for accurate object detection and semantic segmentation. In CVPR, 2014.
[7] K. He, X. Zhang, S. Ren, and J. Sun. Spatial pyramid pooling in deep convolutional networks for visual recognition. In ECCV, 2014.
[8] J. Hosang, R. Benenson, P. Dollár, and B. Schiele. What makes for effective detection proposals? arXiv:1502.05082, 2015.
[9] J. Hosang, R. Benenson, and B. Schiele. How good are detection proposals, really? In BMVC, 2014.
[10] Y. Jia, E. Shelhamer, J. Donahue, S. Karayev, J. Long, R. Girshick, S. Guadarrama, and T. Darrell. Caffe: Convolutional architecture for fast feature embedding. arXiv:1408.5093, 2014.
[11] A. Krizhevsky, I. Sutskever, and G. Hinton. ImageNet classification with deep convolutional neural networks. In NIPS, 2012.
[12] Y. LeCun, B. Boser, J. S. Denker, D. Henderson, R. E. Howard, W. Hubbard, and L. D. Jackel. Backpropagation applied to handwritten zip code recognition. Neural Computation, 1989.
[13] K. Lenc and A. Vedaldi. R-CNN minus R. arXiv:1506.06981, 2015.
[14] J. Long, E. Shelhamer, and T. Darrell. Fully convolutional networks for semantic segmentation. In CVPR, 2015.
[15] V. Nair and G. E. Hinton. Rectified linear units improve restricted Boltzmann machines. In ICML, 2010.
[16] S. Ren, K. He, R. Girshick, X. Zhang, and J. Sun. Object detection networks on convolutional feature maps. arXiv:1504.06066, 2015.
[17] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, A. C. Berg, and L. Fei-Fei. ImageNet Large Scale Visual Recognition Challenge. arXiv:1409.0575, 2014.
[18] P. Sermanet, D. Eigen, X. Zhang, M. Mathieu, R. Fergus, and Y. LeCun. OverFeat: Integrated recognition, localization and detection using convolutional networks. In ICLR, 2014.
[19] K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. In ICLR, 2015.
[20] C. Szegedy, S. Reed, D. Erhan, and D. Anguelov. Scalable, high-quality object detection. arXiv:1412.1441v2, 2015.
[21] C. Szegedy, A. Toshev, and D. Erhan. Deep neural networks for object detection. In NIPS, 2013.
[22] J. R. Uijlings, K. E. van de Sande, T. Gevers, and A. W. Smeulders.
Selective search for object recognition. IJCV, 2013.
[23] M. D. Zeiler and R. Fergus. Visualizing and understanding convolutional neural networks. In ECCV, 2014.
[24] C. L. Zitnick and P. Dollár. Edge boxes: Locating object proposals from edges. In ECCV, 2014.
Stochastic Expectation Propagation

Yingzhen Li, University of Cambridge, Cambridge, CB2 1PZ, UK, yl494@cam.ac.uk
José Miguel Hernández-Lobato, Harvard University, Cambridge, MA 02138, USA, jmh@seas.harvard.edu
Richard E. Turner, University of Cambridge, Cambridge, CB2 1PZ, UK, ret26@cam.ac.uk

Abstract

Expectation propagation (EP) is a deterministic approximation algorithm that is often used to perform approximate Bayesian parameter learning. EP approximates the full intractable posterior distribution through a set of local approximations that are iteratively refined for each datapoint. EP can offer analytic and computational advantages over other approximations, such as Variational Inference (VI), and is the method of choice for a number of models. The local nature of EP appears to make it an ideal candidate for performing Bayesian learning on large models in large-scale dataset settings. However, EP has a crucial limitation in this context: the number of approximating factors needs to increase with the number of datapoints, N, which often entails a prohibitively large memory overhead. This paper presents an extension to EP, called stochastic expectation propagation (SEP), that maintains a global posterior approximation (like VI) but updates it in a local way (like EP). Experiments on a number of canonical learning problems using synthetic and real-world datasets indicate that SEP performs almost as well as full EP, but reduces the memory consumption by a factor of N. SEP is therefore ideally suited to performing approximate Bayesian learning in the large model, large dataset setting.

1 Introduction

Recently a number of methods have been developed for applying Bayesian learning to large datasets. Examples include sampling approximations [1, 2], distributional approximations including stochastic variational inference (SVI) [3] and assumed density filtering (ADF) [4], and approaches that mix distributional and sampling approximations [5, 6].
One family of approximation methods has garnered less attention in this regard: Expectation Propagation (EP) [7, 8]. EP constructs a posterior approximation by iterating simple local computations that refine factors which approximate the posterior contribution from each datapoint. At first sight, it therefore appears well suited to large-data problems: the locality of computation makes the algorithm simple to parallelise and distribute, and good practical performance on a range of small-data applications suggests that it will be accurate [9, 10, 11]. However, the elegance of local computation has been bought at the price of a prohibitive memory overhead that grows with the number of datapoints N, since local approximating factors need to be maintained for every datapoint, each of which typically incurs the same memory overhead as the global approximation. The same pathology exists for the broader class of power EP (PEP) algorithms [12] that includes variational message passing [13]. In contrast, variational inference (VI) methods [14, 15] utilise global approximations that are refined directly, which prevents memory overheads from scaling with N. Is there ever a case for preferring EP (or PEP) to VI methods for large data? We believe that there certainly is. First, EP can provide significantly more accurate approximations. It is well known that variational free-energy approaches are biased, often severely so [16], and for particular models, such as those with non-smooth likelihood functions, the variational free-energy objective is pathologically ill-suited [11, 17]. Second, the fact that EP is truly local (to factors in the posterior distribution and not just likelihoods) means that it affords different opportunities for tractable algorithm design, as the updates can be simpler to approximate. As EP appears to be the method of choice for some applications, researchers have attempted to push it to scale.
One approach is to swallow the large computational burden and simply use large data structures to store the approximating factors (e.g. TrueSkill [18]). This approach can only be pushed so far. A second approach is to use ADF, a simple variant of EP that only requires a global approximation to be maintained in memory [19]. ADF, however, provides poorly calibrated uncertainty estimates [7], which was one of the main motivating reasons for developing EP in the first place. A third idea, complementary to the one described here, is to use approximating factors that have simpler structure (e.g. low rank [20]). This reduces memory consumption (e.g. for Gaussian factors, from O(ND²) to O(ND)), but does not stop the scaling with N. Another idea uses EP to carve up the dataset [5, 6], using approximating factors for collections of datapoints. This results in coarse-grained, rather than local, updates, and other methods must be used to compute them. (Indeed, the spirit of [5, 6] is to extend sampling methods to large datasets, not EP itself.) Can we have the best of both worlds? That is, accurate global approximations that are derived from truly local computation. To address this question we develop an algorithm, based upon the standard EP and ADF algorithms, that maintains a global approximation which is updated in a local way. We call this class of algorithms Stochastic Expectation Propagation (SEP) since it updates the global approximation with (damped) stochastic estimates on data sub-samples, in an analogous way to SVI. Indeed, the generalisation of the algorithm to the PEP setting directly relates to SVI. Importantly, SEP reduces the memory footprint by a factor of N when compared to EP. We further extend the method to control the granularity of the approximation, and to treat models with latent variables without compromising on accuracy or incurring unnecessary memory demands.
Finally, we demonstrate the scalability and accuracy of the method on a number of real-world and synthetic datasets.

2 Expectation Propagation and Assumed Density Filtering

We begin by briefly reviewing the EP and ADF algorithms upon which our new method is based. Consider for simplicity observing a dataset comprising N i.i.d. samples D = {xn}_{n=1}^N from a probabilistic model p(x|θ) parametrised by an unknown D-dimensional vector θ that is drawn from a prior p0(θ). Exact Bayesian inference involves computing the (typically intractable) posterior distribution of the parameters given the data,

p(θ|D) ∝ p0(θ) ∏_{n=1}^N p(xn|θ) ≈ q(θ) ∝ p0(θ) ∏_{n=1}^N fn(θ).    (1)

Here q(θ) is a simpler tractable approximating distribution that will be refined by EP. The goal of EP is to refine the approximate factors so that they capture the contribution of each of the likelihood terms to the posterior, i.e. fn(θ) ≈ p(xn|θ). In this spirit, one approach would be to find each approximating factor fn(θ) by minimising the Kullback-Leibler (KL) divergence between the posterior and the distribution formed by replacing one of the likelihoods by its corresponding approximating factor, KL[p(θ|D) || p(θ|D) fn(θ)/p(xn|θ)]. Unfortunately, such an update is still intractable as it involves computing the full posterior. Instead, EP approximates this procedure by replacing the exact leave-one-out posterior p−n(θ) ∝ p(θ|D)/p(xn|θ) on both sides of the KL by the approximate leave-one-out posterior (called the cavity distribution) q−n(θ) ∝ q(θ)/fn(θ). Since this couples the updates for the approximating factors, the updates must now be iterated.

In more detail, EP iterates four simple steps. First, the factor selected for update is removed from the approximation to produce the cavity distribution. Second, the corresponding likelihood is included to produce the tilted distribution p̃n(θ) ∝ q−n(θ) p(xn|θ). Third, EP updates the approximating factor by minimising KL[p̃n(θ) || q−n(θ) fn(θ)].
The hope is that the contribution the true likelihood makes to the posterior is similar to the effect the same likelihood has on the tilted distribution. If the approximating distribution is in the exponential family, as is often the case, then the KL minimisation reduces to a moment matching step [21] that we denote fn(θ) ← proj[p̃n(θ)]/q−n(θ). Finally, having updated the factor, it is included into the approximating distribution. We summarise the update procedure for a single factor in Algorithm 1. Critically, the approximation step of EP involves local computations since one likelihood term is treated at a time. The assumption is that these local computations, although possibly requiring further approximation, are far simpler to handle compared to the full posterior p(θ|D). In practice, EP often performs well when the updates are parallelised.

Algorithm 1 EP
1: choose a factor fn to refine
2: compute cavity distribution q−n(θ) ∝ q(θ)/fn(θ)
3: compute tilted distribution p̃n(θ) ∝ p(xn|θ) q−n(θ)
4: moment matching: fn(θ) ← proj[p̃n(θ)]/q−n(θ)
5: inclusion: q(θ) ← q−n(θ) fn(θ)

Algorithm 2 ADF
1: choose a datapoint xn ∼ D
2: compute cavity distribution q−n(θ) = q(θ)
3: compute tilted distribution p̃n(θ) ∝ p(xn|θ) q−n(θ)
4: moment matching: fn(θ) ← proj[p̃n(θ)]/q−n(θ)
5: inclusion: q(θ) ← q−n(θ) fn(θ)

Algorithm 3 SEP
1: choose a datapoint xn ∼ D
2: compute cavity distribution q−1(θ) ∝ q(θ)/f(θ)
3: compute tilted distribution p̃n(θ) ∝ p(xn|θ) q−1(θ)
4: moment matching: fn(θ) ← proj[p̃n(θ)]/q−1(θ)
5: inclusion: q(θ) ← q−1(θ) fn(θ)
6: implicit update: f(θ) ← f(θ)^{1−1/N} fn(θ)^{1/N}

Figure 1: Comparing the Expectation Propagation (EP), Assumed Density Filtering (ADF), and Stochastic Expectation Propagation (SEP) update steps. Typically, the algorithms will be initialised using q(θ) = p0(θ) and, where appropriate, fn(θ) = 1 or f(θ) = 1.
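The four steps of Algorithm 1 can be made concrete on a toy conjugate model where every operation is exact. The following sketch (our illustration, not the paper's code) infers a Gaussian mean: x_n ~ N(θ, noise_var) with prior θ ~ N(0, prior_var), so all factors are Gaussian, the moment-matching projection is closed-form, and EP recovers the exact posterior:

```python
import numpy as np

def ep_gaussian_mean(x, prior_var=1.0, noise_var=1.0, n_iters=5):
    """EP on a toy conjugate model.  Natural parameters are used
    throughout: lam = 1/variance, eta = mean/variance."""
    N = len(x)
    f_lam = np.zeros(N)                  # site precisions
    f_eta = np.zeros(N)                  # site precision-means
    q_lam, q_eta = 1.0 / prior_var, 0.0  # q initialised to the prior
    for _ in range(n_iters):
        for n in range(N):
            # steps 1-2: remove site n -> cavity
            c_lam, c_eta = q_lam - f_lam[n], q_eta - f_eta[n]
            # step 3: tilted = cavity * exact Gaussian likelihood
            #         (conjugate, so its moments are available in closed form)
            t_lam = c_lam + 1.0 / noise_var
            t_eta = c_eta + x[n] / noise_var
            # step 4: moment matching then division by the cavity leaves
            #         the likelihood's natural parameters as the new site
            f_lam[n] = t_lam - c_lam
            f_eta[n] = t_eta - c_eta
            # step 5: inclusion
            q_lam, q_eta = t_lam, t_eta
    return q_eta / q_lam, 1.0 / q_lam    # posterior mean, variance
```

Because this model is fully conjugate, the result matches the exact posterior (precision 1/prior_var + N/noise_var), which makes it a useful correctness check before tackling models where proj[·] requires approximation.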
Moreover, by using approximating factors for groups of datapoints, and then running additional approximate inference algorithms to perform the EP updates (which could include nesting EP), EP carves up the data, making it suitable for distributed approximate inference. There is, however, one wrinkle that complicates deployment of EP at scale. Computation of the cavity distribution requires removal of the current approximating factor, which means any implementation of EP must store the factors explicitly, necessitating an O(N) memory footprint. One option is to simply ignore the removal step, replacing the cavity distribution with the full approximation; this results in the ADF algorithm (Algorithm 2), which only needs to maintain a global approximation in memory. But since the moment matching step now over-counts the underlying approximating factor (consider the new form of the objective, KL[q(θ)p(xn|θ)||q(θ)]), the variance of the approximation shrinks to zero as multiple passes are made through the dataset. Early stopping is therefore required to prevent overfitting, and generally speaking ADF does not return uncertainties that are well-calibrated to the posterior. In the next section we introduce a new algorithm that sidesteps EP's large memory demands whilst avoiding the pathological behaviour of ADF.

3 Stochastic Expectation Propagation

In this section we introduce a new algorithm, inspired by EP, called Stochastic Expectation Propagation (SEP), that combines the benefits of local approximation (tractability of updates, distributability, and parallelisability) with those of global approximation (reduced memory demands). The algorithm can be interpreted as a version of EP in which the approximating factors are tied, or alternatively as a corrected version of ADF that prevents overfitting.
The key idea is that, at convergence, the approximating factors in EP can be interpreted as parameterising a global factor, f(θ), that captures the average effect of a likelihood on the posterior: f(θ)^N ≜ ∏_{n=1}^N fn(θ) ≈ ∏_{n=1}^N p(xn|θ). In this spirit, the new algorithm employs direct iterative refinement of a global approximation comprising the prior and N copies of a single approximating factor, f(θ), that is q(θ) ∝ f(θ)^N p0(θ). SEP uses updates that are analogous to EP's in order to refine f(θ) in such a way that it captures the average effect a likelihood function has on the posterior. First, the cavity distribution is formed by removing one of the copies of the factor, q−1(θ) ∝ q(θ)/f(θ). Second, the corresponding likelihood is included to produce the tilted distribution p̃n(θ) ∝ q−1(θ)p(xn|θ) and, third, SEP finds an intermediate factor approximation by moment matching, fn(θ) ← proj[p̃n(θ)]/q−1(θ). Finally, having updated the factor, it is included into the approximating distribution. It is important here not to make a full update since fn(θ) captures the effect of just a single likelihood function p(xn|θ). Instead, damping should be employed to make a partial update f(θ) ← f(θ)^{1−ϵ} fn(θ)^ϵ. A natural choice uses ϵ = 1/N, which can be interpreted as minimising KL[p̃n(θ)||p0(θ)f(θ)^N] in the moment update, but other choices of ϵ may be more appropriate, including decreasing ϵ according to the Robbins-Monro condition [22]. SEP is summarised in Algorithm 3. Unlike ADF, the cavity is formed by dividing out f(θ), which captures the average effect of the likelihoods and prevents the posterior from collapsing. Like ADF, however, SEP only maintains the global approximation q(θ), since f(θ) ∝ (q(θ)/p0(θ))^{1/N} and q−1(θ) ∝ q(θ)^{1−1/N} p0(θ)^{1/N}. When Gaussian approximating factors are used, for example, SEP reduces the storage requirement of EP from O(ND²) to O(D²), which is a substantial saving that enables models with many parameters to be applied to large datasets.
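A corresponding sketch of Algorithm 3 on the same toy conjugate Gaussian-mean model (again our illustration, not the paper's code): the single tied site is updated with damping ϵ = 1/N, which for exponential-family sites is a convex combination in natural parameters. With a fixed ϵ the shared site converges to a weighted running average of the intermediate sites, close to (though not exactly at) the EP fixed point on this model:

```python
import numpy as np

def sep_gaussian_mean(x, prior_var=1.0, noise_var=1.0, n_sweeps=50):
    """SEP with one tied Gaussian site f(theta) raised to the power N.
    Natural parameters: lam = 1/variance, eta = mean/variance.
    The damped geometric update f <- f^(1-eps) * fn^eps becomes
    f <- (1-eps)*f + eps*fn in natural parameters."""
    N = len(x)
    eps = 1.0 / N
    f_lam, f_eta = 0.0, 0.0              # the single shared site
    p0_lam = 1.0 / prior_var
    for _ in range(n_sweeps):
        for xn in x:
            q_lam = p0_lam + N * f_lam   # q ∝ p0 * f^N
            q_eta = N * f_eta
            # cavity q_{-1}: remove one copy of the shared site
            c_lam, c_eta = q_lam - f_lam, q_eta - f_eta
            # intermediate site from the tilted distribution; in this fully
            # conjugate toy it equals the likelihood's natural parameters
            # and happens not to depend on the cavity
            fn_lam = 1.0 / noise_var
            fn_eta = xn / noise_var
            # implicit damped update of the shared site (eps = 1/N)
            f_lam = (1 - eps) * f_lam + eps * fn_lam
            f_eta = (1 - eps) * f_eta + eps * fn_eta
    q_lam = p0_lam + N * f_lam
    q_eta = N * f_eta
    return q_eta / q_lam, 1.0 / q_lam
```

Note that only f(θ) and the prior are stored, never per-datapoint sites, which is exactly the O(1)-in-N memory behaviour described above.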
4 Algorithmic extensions to SEP and theoretical results

SEP has been motivated from a practical perspective by the limitations inherent in EP and ADF. In this section we extend SEP in four orthogonal directions and relate SEP to SVI. Many of the algorithms described here are summarised in Figure 2, and they are detailed in the supplementary material.

4.1 Parallel SEP: relating the EP fixed points to SEP

The SEP algorithm outlined above approximates one likelihood at a time, which can be computationally slow. However, it is simple to parallelise the SEP updates by following the same recipe by which EP is parallelised. Consider a minibatch comprising M datapoints (for a full parallel batch update use M = N). First we form the cavity distribution for each likelihood. Unlike EP, these are all identical. Next, in parallel, compute M intermediate factors fm(θ) ← proj[p̃m(θ)]/q−1(θ). In EP these intermediate factors become the new likelihood approximations and the approximation is updated to q(θ) = p0(θ) ∏_{n∉minibatch} fn(θ) ∏_m fm(θ). In SEP, the same update is used for the approximating distribution, which becomes q(θ) ← p0(θ) fold(θ)^{N−M} ∏_m fm(θ) and, by implication, the approximating factor is fnew(θ) = fold(θ)^{1−M/N} ∏_{m=1}^M fm(θ)^{1/N}. One way of understanding parallel SEP is as a double-loop algorithm. The inner loop produces intermediate approximations qm(θ) ← arg min_q KL[p̃m(θ)||q(θ)]; these are then combined in the outer loop: q(θ) ← arg min_q ∑_{m=1}^M KL[q(θ)||qm(θ)] + (N−M) KL[q(θ)||qold(θ)]. For M = 1, parallel SEP reduces to the original SEP algorithm. For M = N, parallel SEP is equivalent to the so-called Averaged EP (AEP) algorithm proposed in [23] as a theoretical tool to study the convergence properties of normal EP. This work showed that, under fairly restrictive conditions (likelihood functions that are log-concave and vary slowly as a function of the parameters), AEP converges to the same fixed points as EP in the large data limit (N → ∞).
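For exponential-family sites, the parallel update fnew(θ) = fold(θ)^{1−M/N} ∏_m fm(θ)^{1/N} is linear in natural parameters, so combining a minibatch of intermediate factors is a one-liner. A sketch with our own function name:

```python
import numpy as np

def parallel_sep_combine(f_old, f_minibatch, N):
    """Combine M intermediate sites, computed in parallel, into the shared
    site: f_new = f_old^(1 - M/N) * prod_m f_m^(1/N).  For exponential-
    family sites the geometric product is linear in natural parameters.
    f_old: natural-parameter vector of the current shared site.
    f_minibatch: (M, num_natural_params) array of intermediate sites."""
    f_minibatch = np.atleast_2d(f_minibatch)
    M = f_minibatch.shape[0]
    return (1 - M / N) * np.asarray(f_old) + f_minibatch.sum(axis=0) / N
```

With M = 1 this reduces to the damped SEP update with ϵ = 1/N; with M = N the old site is discarded entirely and f_new is the arithmetic mean of the intermediate sites in natural parameters, matching the AEP limit described above.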
There is another illuminating connection between SEP and AEP. Since SEP's approximating factor f(θ) converges to the geometric average of the intermediate factors, f̄(θ) ∝ [∏_{n=1}^N fn(θ)]^{1/N}, SEP converges to the same fixed points as AEP if the learning rates satisfy the Robbins-Monro condition [22], and therefore, under certain conditions [23], to the same fixed points as EP. But it is still an open question whether there are more direct relationships between EP and SEP.

4.2 Stochastic power EP: relationships to variational methods

The relationship between variational inference and stochastic variational inference [3] mirrors the relationship between EP and SEP. Can these relationships be made more formal? If the moment projection step in EP is replaced by a natural parameter matching step, then the resulting algorithm is equivalent to the Variational Message Passing (VMP) algorithm [24] (see supplementary material). Moreover, VMP has the same fixed points as variational inference [13] (since minimising the local variational KL divergences is equivalent to minimising the global variational KL). These results carry over to the new algorithms with minor modifications. Specifically, VMP can be transformed into SVMP by replacing VMP's local approximations with the global form employed by SEP. In the supplementary material we show that this algorithm is an instance of standard SVI and that it therefore has the same fixed points as VI when ϵ satisfies the Robbins-Monro condition [22].
More generally, the procedure can be applied to any member of the power EP (PEP) [12] family of algorithms, which replace the moment projection step in EP with alpha-divergence minimisation [21], but care has to be taken when taking the limiting cases (see supplementary). These results lend weight to the view that SEP is a natural stochastic generalisation of EP.

Figure 2: Relationships between the algorithms (panel A) and between their fixed points (panel B), covering EP, SEP, AEP, PEP, VMP, SVMP, AVMP (Averaged VMP), VI and the parallel variants par-EP, par-SEP and par-VMP, as the alpha-divergence (a = 1 vs. a = -1), minibatch size (M = 1 vs. M = N) and number of approximating factors (K = 1 vs. K = N) are varied. Note that care needs to be taken when interpreting the alpha-divergence as a → −1 (see supplementary material).

4.3 Distributed SEP: controlling granularity of the approximation

EP uses a fine-grained approximation comprising a single factor for each likelihood. SEP, on the other hand, uses a coarse-grained approximation comprising a single global factor to approximate the average effect of all likelihood terms. One might worry that SEP's approximation is too severe if the dataset contains sets of datapoints that have very different likelihood contributions (e.g. for odd-vs-even handwritten digit classification, consider the effect of a 5 and a 9 on the posterior). It might be more sensible in such cases to partition the dataset into K disjoint pieces {Dk}_{k=1}^K, where Dk = {xn}_{n=Nk−1+1}^{Nk} and N = ∑_{k=1}^K Nk, and use an approximating factor for each partition. If normal EP updates are performed on the subsets, i.e.
treating p(Dk|θ) as a single true factor to be approximated, we arrive at the Distributed EP algorithm [5, 6]. But such updates are challenging, as multiple likelihood terms must be included during each update, necessitating additional approximations (e.g. MCMC). A simpler alternative uses SEP/AEP inside each partition, implying a posterior approximation of the form q(θ) ∝ p0(θ) ∏_{k=1}^K fk(θ)^{Nk} with fk(θ)^{Nk} approximating p(Dk|θ). The limiting cases of this algorithm, when K = 1 and K = N, recover SEP and EP respectively.

4.4 SEP with latent variables

Many applications of EP involve latent variable models. Although this is not the main focus of the paper, we show that SEP is applicable in this case without scaling the memory footprint with N. Consider a model containing hidden variables, hn, associated with each observation p(xn, hn|θ), that are drawn i.i.d. from a prior p0(hn). The goal is to approximate the true posterior over parameters and hidden variables, p(θ, {hn}|D) ∝ p0(θ) ∏_n p0(hn) p(xn|hn, θ). Typically, EP would approximate the effect of each intractable term as p(xn|hn, θ)p0(hn) ≈ fn(θ)gn(hn). Instead, SEP ties the approximate parameter factors, p(xn|hn, θ)p0(hn) ≈ f(θ)gn(hn), yielding:

q(θ, {hn}) ∝ p0(θ) f(θ)^N ∏_{n=1}^N gn(hn).    (2)

Critically, as proved in the supplementary, the local factors gn(hn) do not need to be maintained in memory. This means that all of the advantages of SEP carry over to more complex models involving latent variables, although this can potentially increase computation time in cases where updates for gn(hn) are not analytic, since they will then be initialised from scratch at each update.

5 Experiments

The purpose of the experiments was to evaluate SEP on a number of datasets (synthetic and real-world, small and large) and on a number of models (probit regression, mixture of Gaussians and Bayesian neural networks).
5.1 Bayesian probit regression

The first experiments considered a simple Bayesian classification problem and investigated the stability and quality of SEP in relation to EP and ADF, as well as the effect of using minibatches and varying the granularity of the approximation. The model comprised a probit likelihood function, P(yn = 1|θ) = Φ(θ^T xn), and a Gaussian prior over the hyper-plane parameter, p(θ) = N(θ; 0, γI). The synthetic data comprised N = 5,000 datapoints {(xn, yn)}, where the xn were D = 4 dimensional and were either sampled from a single Gaussian distribution (Fig. 3(a)) or from a mixture of Gaussians (MoGs) with J = 5 components (Fig. 3(b)) to investigate the sensitivity of the methods to the homogeneity of the dataset. The labels were produced by sampling from the generative model. We followed [6], measuring performance by computing an approximation of KL[p(θ|D)||q(θ)], where p(θ|D) was replaced by a Gaussian that had the same mean and covariance as samples drawn from the posterior using the No-U-Turn Sampler (NUTS) [25], to quantify the calibration of the uncertainty estimates. Results in Fig. 3(a) indicate that EP is the best performing method and that ADF collapses towards a delta function. SEP converges to a solution which appears to be of similar quality to that obtained by EP for the dataset containing Gaussian inputs, but slightly worse when the MoGs was used. Variants of SEP that used larger minibatches fluctuated less, but typically took longer to converge (although for the small minibatches shown this effect is not clear). The utility of finer-grained approximations depended on the homogeneity of the data. For the second dataset containing MoG inputs (shown in Fig. 3(b)), finer-grained approximations were found to be advantageous if the datapoints from each mixture component are assigned to the same approximating factor.
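For the probit likelihood used above, the tilted-distribution moments needed by the moment-matching step have a well-known closed form (a standard Gaussian-EP result, familiar from GP classification; the function name and the 1-D restriction are ours, for illustration):

```python
import math

def probit_tilted_moments(m, v, y):
    """Mean and variance of the tilted distribution
        p_tilde(theta) ∝ N(theta; m, v) * Phi(y * theta),   y in {-1, +1},
    where (m, v) are the cavity mean and variance.  This is the standard
    closed-form result for a probit factor under a Gaussian cavity."""
    phi = lambda t: math.exp(-0.5 * t * t) / math.sqrt(2.0 * math.pi)
    Phi = lambda t: 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))
    z = y * m / math.sqrt(1.0 + v)
    ratio = phi(z) / Phi(z)
    m_new = m + y * v * ratio / math.sqrt(1.0 + v)
    v_new = v - (v * v / (1.0 + v)) * ratio * (z + ratio)
    return m_new, v_new
```

Dividing the moment-matched Gaussian by the cavity then gives the new site, exactly as in step 4 of Algorithms 1-3; for the multivariate model P(yn = 1|θ) = Φ(θ^T xn) the same scalar computation applies along the projection θ^T xn.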
Generally it was found that there is no advantage to retaining more approximating factors than there were clusters in the dataset. To verify whether these conclusions about the granularity of the approximation hold in real datasets, we sampled N = 1,000 datapoints for each of the digits in MNIST and performed odd-vs-even classification. Each digit class was assigned its own global approximating factor, K = 10. We compare the log-likelihood of a test set using ADF, SEP (K = 1), full EP and DSEP (K = 10) in Figure 3(c). EP and DSEP significantly outperform ADF. DSEP is slightly worse than full EP initially; however, it reduces the memory to 0.001% of full EP without substantially losing accuracy. SEP's accuracy was still increasing at the end of learning and was slightly better than ADF's. Further empirical comparisons are reported in the supplementary; in summary, the three EP methods are indistinguishable when likelihood functions have similar contributions to the posterior. Finally, we tested SEP's performance on six small binary classification datasets from the UCI machine learning repository.1 We did not consider the effect of minibatches or the granularity of the approximation, using K = M = 1. We ran the tests with damping and stopped learning after convergence (by monitoring the updates of the approximating factors). The classification results are summarised in Table 1. ADF performs reasonably well on the mean classification error metric, presumably because it tends to learn a good approximation to the posterior mode. However, the posterior variance is poorly approximated and therefore ADF returns poor test log-likelihood scores. EP achieves significantly higher test log-likelihood than ADF, indicating that a superior approximation to the posterior variance is attained. Crucially, SEP performs very similarly to EP, implying that SEP is an accurate alternative to EP even though it is refining a cheaper global posterior approximation.
1 https://archive.ics.uci.edu/ml/index.html

Figure 3: Bayesian logistic regression experiments. Panels (a) and (b) show synthetic data experiments. Panel (c) shows the results on MNIST (see text for full details).

Table 1: Average test results for all methods on probit regression. All methods appear to capture the posterior's mode; however, EP outperforms ADF in terms of test log-likelihood on almost all of the datasets, with SEP performing similarly to EP.

           | mean error                                   | test log-likelihood
Dataset    | ADF          | SEP          | EP           | ADF          | SEP          | EP
Australian | 0.328±0.0127 | 0.325±0.0135 | 0.330±0.0133 | -0.634±0.010 | -0.631±0.009 | -0.631±0.009
Breast     | 0.037±0.0045 | 0.034±0.0034 | 0.034±0.0039 | -0.100±0.015 | -0.094±0.011 | -0.093±0.011
Crabs      | 0.056±0.0133 | 0.033±0.0099 | 0.036±0.0113 | -0.242±0.012 | -0.125±0.013 | -0.110±0.013
Ionos      | 0.126±0.0166 | 0.130±0.0147 | 0.131±0.0149 | -0.373±0.047 | -0.336±0.029 | -0.324±0.028
Pima       | 0.242±0.0093 | 0.244±0.0098 | 0.241±0.0093 | -0.516±0.013 | -0.514±0.012 | -0.513±0.012
Sonar      | 0.198±0.0208 | 0.198±0.0217 | 0.198±0.0243 | -0.461±0.053 | -0.418±0.021 | -0.415±0.021

5.2 Mixture of Gaussians for clustering

The small-scale experiments on probit regression indicate that SEP performs well for fully-observed probabilistic models. Although it is not the main focus of the paper, we sought to test the flexibility of the method by applying it to a latent variable model, specifically a mixture of Gaussians. A synthetic MoG dataset containing N = 200 datapoints was constructed comprising J = 4 Gaussians. The means were sampled from a Gaussian distribution, p(µj) = N(µ; m, I), the cluster identity variables were sampled from a uniform categorical distribution, p(hn = j) = 1/4, and each mixture component was isotropic, p(xn|hn) = N(xn; µ_hn, 0.5²I). EP, ADF and SEP were performed to approximate the joint posterior over the cluster means {µj} and cluster identity variables {hn} (the other parameters were assumed known). Figure 4(a) visualises the approximate posteriors after 200 iterations.
All methods return good estimates for the means, but ADF collapses towards a point estimate as expected. SEP, in contrast, captures the uncertainty and returns nearly identical approximations to EP. The accuracy of the methods is quantified in Fig. 4(b) by comparing the approximate posteriors to those obtained from NUTS. In this case the approximate KL-divergence measure is analytically intractable; instead, we used the averaged F-norm of the difference between the Gaussian parameters fitted by NUTS and by the EP methods. These measures confirm that SEP approximates EP well in this case.

5.3 Probabilistic backpropagation

The final set of tests considers more complicated models and large datasets. Specifically, we evaluate the methods on probabilistic backpropagation (PBP) [4], a recent state-of-the-art method for scalable Bayesian learning in neural network models. Previous implementations of PBP perform several iterations of ADF over the training data. The moment matching operations required by ADF are themselves intractable, and they are approximated by first propagating the uncertainty on the synaptic weights forward through the network in a sequential way, and then computing the gradient of the marginal likelihood by backpropagation. ADF is used to reduce the large memory cost that would be required by EP when the amount of available data is very large. We performed several experiments to assess the accuracy of different implementations of PBP based on ADF, SEP and EP on regression datasets, following the same experimental protocol as in [4] (see supplementary material). We considered neural networks with 50 hidden units (except for Year and Protein, for which we used 100). Table 2 shows the average test RMSE and test log-likelihood for each method. Interestingly, SEP can outperform EP in this setting (possibly because the stochasticity enabled it to find better solutions), and typically it performed similarly.
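The forward uncertainty propagation at the heart of PBP can be illustrated for a single linear layer (a simplified sketch of the idea, not the PBP implementation, which also handles nonlinearities and output moments): with independent Gaussian weights W_ij ~ N(M_ij, V_ij) and a fixed input x, the pre-activations a = Wx have analytically available moments.

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out = 5, 3
M = rng.normal(size=(d_out, d_in))     # weight means
V = 0.1 * np.ones((d_out, d_in))       # weight variances
x = rng.normal(size=d_in)              # a fixed input

# Analytic moments of the pre-activation a = W @ x when W_ij ~ N(M_ij, V_ij):
m_a = M @ x                            # E[a]   = M x
v_a = V @ (x ** 2)                     # Var[a] = V x^2 (elementwise square)

# Monte Carlo check of the propagated moments
W = rng.normal(M, np.sqrt(V), size=(100_000, d_out, d_in))
a_mc = W @ x
```

PBP chains this kind of moment computation layer by layer, then backpropagates through the resulting marginal likelihood approximation.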
Figure 4: Posterior approximation for the mean of the Gaussian components. (a) visualises posterior approximations over the cluster means (98% confidence level). The coloured dots indicate the true label (top-left) or the inferred cluster assignments (the rest). (b) shows the error (in F-norm) of the approximate Gaussians' means (top) and covariances (bottom).

Table 2: Average test results for all methods. Datasets are also from the UCI machine learning repository.

                     RMSE                                        test log-likelihood
Dataset   ADF           SEP           EP            ADF           SEP           EP
Kin8nm    0.098±0.0007  0.088±0.0009  0.089±0.0006   0.896±0.006   1.013±0.011   1.005±0.007
Naval     0.006±0.0000  0.002±0.0000  0.004±0.0000   3.731±0.006   4.590±0.014   4.207±0.011
Power     4.124±0.0345  4.165±0.0336  4.191±0.0349  -2.837±0.009  -2.846±0.008  -2.852±0.008
Protein   4.727±0.0112  4.670±0.0109  4.748±0.0137  -2.973±0.003  -2.961±0.003  -2.979±0.003
Wine      0.635±0.0079  0.650±0.0082  0.637±0.0076  -0.968±0.014  -0.976±0.013  -0.958±0.011
Year      8.879±NA      8.922±NA      8.914±NA      -3.603±NA     -3.924±NA     -3.929±NA

Memory reductions from using SEP instead of EP were large, e.g. 694 MB for the Protein dataset and 65,107 MB for the Year dataset (see supplementary). Surprisingly, ADF often outperformed EP, although the results presented for ADF use a near-optimal number of sweeps and further iterations generally degraded performance. ADF's good performance is most likely due to an interaction with an additional moment approximation required in PBP, which becomes more accurate as the number of factors increases.

6 Conclusions and future work

This paper has presented the stochastic expectation propagation method for reducing EP's large memory consumption, which is prohibitive for large datasets. We have connected the new algorithm to a number of existing methods including assumed density filtering, variational message passing, variational inference, stochastic variational inference and averaged EP.
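The memory savings reported above are easy to rationalize with a back-of-the-envelope parameter count (an illustration with assumed sizes, not the paper's exact accounting): EP stores one Gaussian site factor per datapoint, while SEP stores a single average site.

```python
# Bytes to store one Gaussian factor in natural parameters: a D-dim
# linear term plus a symmetric D x D precision, i.e. D + D*(D+1)/2 doubles.
def factor_bytes(D: int) -> int:
    return 8 * (D + D * (D + 1) // 2)

D = 1000          # assumed number of model parameters
N = 500_000       # assumed number of datapoints
ep_mem = N * factor_bytes(D)    # EP: one local factor per datapoint
sep_mem = factor_bytes(D)       # SEP: one global average factor
ratio = ep_mem / sep_mem        # memory shrinks by exactly the dataset size N
```

The ratio is N regardless of D, which is why the savings grow with the dataset while the accuracy experiments above stay comparable.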
Experiments on Bayesian logistic regression (both synthetic and real world) and mixture-of-Gaussians clustering indicated that the new method had an accuracy that was competitive with EP. Experiments on probabilistic backpropagation on large real-world regression datasets again showed that SEP performed comparably to EP with a vastly reduced memory footprint. Future experimental work will focus on data-partitioning methods that leverage finer-grained approximations (DSEP), which showed promising experimental performance, and on mini-batch updates. There is also a need for further theoretical understanding of these algorithms, and indeed of EP itself. Theoretical work will study the convergence properties of the new algorithms, for which we only have limited results at present. Systematic comparisons of EP-like algorithms and variational methods will guide practitioners in choosing the appropriate scheme for their application.

Acknowledgements

We thank the reviewers for valuable comments. YL thanks the Schlumberger Foundation Faculty for the Future fellowship for supporting her PhD study. JMHL acknowledges support from the Rafael del Pino Foundation. RET thanks EPSRC grants EP/G050821/1 and EP/L000776/1.

References

[1] Sungjin Ahn, Babak Shahbaba, and Max Welling. Distributed stochastic gradient MCMC. In Proceedings of the 31st International Conference on Machine Learning (ICML-14), pages 1044–1052, 2014.
[2] Rémi Bardenet, Arnaud Doucet, and Chris Holmes. Towards scaling up Markov chain Monte Carlo: an adaptive subsampling approach. In Proceedings of the 31st International Conference on Machine Learning (ICML-14), pages 405–413, 2014.
[3] Matthew D. Hoffman, David M. Blei, Chong Wang, and John William Paisley. Stochastic variational inference. Journal of Machine Learning Research, 14(1):1303–1347, 2013.
[4] José Miguel Hernández-Lobato and Ryan P. Adams. Probabilistic backpropagation for scalable learning of Bayesian neural networks. arXiv:1502.05336, 2015.
[5] Andrew Gelman, Aki Vehtari, Pasi Jylänki, Christian Robert, Nicolas Chopin, and John P. Cunningham. Expectation propagation as a way of life. arXiv:1412.4869, 2014.
[6] Minjie Xu, Balaji Lakshminarayanan, Yee Whye Teh, Jun Zhu, and Bo Zhang. Distributed Bayesian posterior sampling via moment sharing. In NIPS, 2014.
[7] Thomas P. Minka. Expectation propagation for approximate Bayesian inference. In Uncertainty in Artificial Intelligence, volume 17, pages 362–369, 2001.
[8] Manfred Opper and Ole Winther. Expectation consistent approximate inference. The Journal of Machine Learning Research, 6:2177–2204, 2005.
[9] Malte Kuss and Carl Edward Rasmussen. Assessing approximate inference for binary Gaussian process classification. The Journal of Machine Learning Research, 6:1679–1704, 2005.
[10] Simon Barthelmé and Nicolas Chopin. Expectation propagation for likelihood-free inference. Journal of the American Statistical Association, 109(505):315–333, 2014.
[11] John P. Cunningham, Philipp Hennig, and Simon Lacoste-Julien. Gaussian probabilities and expectation propagation. arXiv preprint arXiv:1111.6832, 2011.
[12] Thomas P. Minka. Power EP. Technical Report MSR-TR-2004-149, Microsoft Research, Cambridge, 2004.
[13] John M. Winn and Christopher M. Bishop. Variational message passing. Journal of Machine Learning Research, 6:661–694, 2005.
[14] Michael I. Jordan, Zoubin Ghahramani, Tommi S. Jaakkola, and Lawrence K. Saul. An introduction to variational methods for graphical models. Machine Learning, 37(2):183–233, 1999.
[15] Matthew James Beal. Variational algorithms for approximate Bayesian inference. PhD thesis, University of London, 2003.
[16] Richard E. Turner and Maneesh Sahani. Two problems with variational expectation maximisation for time-series models. In D. Barber, T. Cemgil, and S. Chiappa, editors, Bayesian Time Series Models, chapter 5, pages 109–130. Cambridge University Press, 2011.
[17] Richard E. Turner and Maneesh Sahani.
Probabilistic amplitude and frequency demodulation. In J. Shawe-Taylor, R.S. Zemel, P. Bartlett, F.C.N. Pereira, and K.Q. Weinberger, editors, Advances in Neural Information Processing Systems 24, pages 981–989, 2011.
[18] Ralf Herbrich, Tom Minka, and Thore Graepel. TrueSkill: A Bayesian skill rating system. In Advances in Neural Information Processing Systems, pages 569–576, 2006.
[19] Peter S. Maybeck. Stochastic Models, Estimation and Control. Academic Press, 1982.
[20] Yuan Qi, Ahmed H. Abdel-Gawad, and Thomas P. Minka. Sparse-posterior Gaussian processes for general likelihoods. In Uncertainty in Artificial Intelligence (UAI), 2010.
[21] Shun-ichi Amari and Hiroshi Nagaoka. Methods of Information Geometry, volume 191. Oxford University Press, 2000.
[22] Herbert Robbins and Sutton Monro. A stochastic approximation method. The Annals of Mathematical Statistics, pages 400–407, 1951.
[23] Guillaume Dehaene and Simon Barthelmé. Expectation propagation in the large-data limit. arXiv:1503.08060, 2015.
[24] Thomas Minka. Divergence measures and message passing. Technical Report MSR-TR-2005-173, Microsoft Research, Cambridge, 2005.
[25] Matthew D. Hoffman and Andrew Gelman. The No-U-Turn Sampler: Adaptively setting path lengths in Hamiltonian Monte Carlo. The Journal of Machine Learning Research, 15(1):1593–1623, 2014.
Beyond Sub-Gaussian Measurements: High-Dimensional Structured Estimation with Sub-Exponential Designs

Vidyashankar Sivakumar, Arindam Banerjee
Department of Computer Science & Engineering, University of Minnesota, Twin Cities
{sivakuma,banerjee}@cs.umn.edu

Pradeep Ravikumar
Department of Computer Science, University of Texas, Austin
pradeepr@cs.utexas.edu

Abstract

We consider the problem of high-dimensional structured estimation with norm-regularized estimators, such as Lasso, when the design matrix and noise are drawn from sub-exponential distributions. Existing results only consider sub-Gaussian designs and noise, and both the sample complexity and non-asymptotic estimation error have been shown to depend on the Gaussian width of suitable sets. In contrast, for the sub-exponential setting, we show that the sample complexity and the estimation error depend on the exponential width of the corresponding sets, and the analysis holds for any norm. Further, using generic chaining, we show that the exponential width of any set is at most √(log p) times its Gaussian width, yielding Gaussian-width-based results even for the sub-exponential case. Further, for certain popular estimators, viz. Lasso and Group Lasso, using a VC-dimension based analysis, we show that the sample complexity is in fact of the same order as for Gaussian designs. Our general analysis and results are the first in the sub-exponential setting, and are readily applicable to special sub-exponential families such as log-concave and extreme-value distributions.

1 Introduction

We consider the following problem of high-dimensional linear regression:

    y = Xθ* + ω ,    (1)

where y ∈ R^n is the response vector, X ∈ R^{n×p} has independent isotropic sub-exponential random rows, ω ∈ R^n has i.i.d. sub-exponential entries, and the number of covariates p is much larger than the number of samples n.
Given y and X, and assuming that θ* is 'structured', usually characterized as having a small value according to some norm R(·), the problem is to recover a θ̂ close to θ*. Considerable progress has been made over the past decade on high-dimensional structured estimation using suitable M-estimators or norm-regularized regression [16, 2] of the form:

    θ̂_{λ_n} = argmin_{θ ∈ R^p} (1/2n) ‖y − Xθ‖₂² + λ_n R(θ) ,    (2)

where R(θ) is a suitable norm, and λ_n > 0 is the regularization parameter. Early work focused on high-dimensional estimation of sparse vectors using the Lasso and related estimators, where R(θ) = ‖θ‖₁ [13, 22, 23]. The sample complexity of such estimators has been rigorously established based on the RIP (restricted isometry property) [4, 5] and the more general RE (restricted eigenvalue) conditions [3, 16, 2]. Several subsequent advances have considered structures beyond ℓ1, using more general norms such as (overlapping) group-sparse norms, the k-support norm, the nuclear norm, and so on [16, 8, 7]. In recent years, much of the literature has been unified, and non-asymptotic estimation error bound analysis techniques have been developed for regularized estimation with any norm [2]. In spite of such advances, most of the existing literature relies on the assumption that the entries of the design matrix X ∈ R^{n×p} are sub-Gaussian. In particular, recent unified treatments based on decomposable norms, atomic norms, or general norms all rely on concentration properties of sub-Gaussian distributions [16, 7, 2]. Certain estimators, such as the Dantzig selector and variants, consider a constrained problem rather than a regularized problem as in (2), but the analysis again relies on the entries of X being sub-Gaussian [6, 8]. For the setting of constrained estimation, building on prior work by [10], [20] outlines a possible strategy for such analysis which can work for any distribution, but works out details only for the sub-Gaussian case.
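The estimator (2) with R(θ) = ‖θ‖₁ (Lasso) can be solved by proximal gradient descent (ISTA). The sketch below is our own illustration with arbitrary problem sizes; it also uses a sub-exponential (Laplace) design of the kind studied in this paper.

```python
import numpy as np

def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_ista(X, y, lam, steps=1000):
    # Proximal gradient on (1/2n) ||y - X theta||_2^2 + lam * ||theta||_1
    n, p = X.shape
    step = 1.0 / (np.linalg.norm(X, 2) ** 2 / n)   # 1 / Lipschitz constant
    theta = np.zeros(p)
    for _ in range(steps):
        grad = X.T @ (X @ theta - y) / n
        theta = soft_threshold(theta - step * grad, step * lam)
    return theta

rng = np.random.default_rng(1)
n, p, s = 200, 300, 10
theta_star = np.zeros(p)
theta_star[:s] = 1.0
# Laplace(0, 1/sqrt(2)) has unit variance: isotropic sub-exponential rows.
X = rng.laplace(0.0, 1.0 / np.sqrt(2.0), size=(n, p))
omega = 0.1 * rng.laplace(0.0, 1.0 / np.sqrt(2.0), size=n)
y = X @ theta_star + omega
lam = 0.1 * np.sqrt(np.log(p) / n)   # illustrative lambda_n ~ sqrt(log p / n) scaling
theta_hat = lasso_ista(X, y, lam)
err = np.linalg.norm(theta_hat - theta_star)
```

With n well above s log p, the estimation error ‖θ̂ − θ*‖₂ is small despite p > n, which is the regime the theory below quantifies.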
In recent work, [9] considered sub-Gaussian design matrices but with heavy-tailed noise, and suggested modifying the estimator via a median-of-means type procedure based on multiple estimates of θ̂ from sub-samples. In this paper, we establish results for the norm-regularized estimation problem (2) for any norm R(θ) under the assumption that the elements X_ij of the design matrix X ∈ R^{n×p} follow a sub-exponential distribution, whose tails are dominated by scaled versions of the (symmetric) exponential distribution, i.e., P(|X_ij| > t) ≤ c₁ exp(−t/c₂) for all t ≥ 0 and for suitable constants c₁, c₂ [12, 21]. To understand the motivation of our work, note that in most of machine learning and statistics, unlike in compressed sensing, the design matrix cannot be chosen but is determined by the problem. In many application domains like finance, climate science, ecology, and social network analysis, variables with heavier tails than sub-Gaussians are frequently encountered. For example, in climate science, variables from extreme-value distributions are used to understand relationships between extreme-value phenomena like heavy precipitation. While high-dimensional statistical techniques have been used in practice for such applications, theoretical guarantees on their performance are currently lacking. Note that the class of sub-exponential distributions has heavier tails than sub-Gaussians but has all moments. To the best of our knowledge, this is the first paper to analyze regularized high-dimensional estimation problems of the form (2) with sub-exponential design matrices and noise. In our main result, we obtain bounds on the estimation error ‖Δ̂_n‖₂ = ‖θ̂_{λ_n} − θ*‖₂, where θ* is the optimal structured parameter. The sample complexity bounds are a factor of log p worse than in the sub-Gaussian case. For example, for the ℓ1 norm, we obtain an n = O(s log² p) sample complexity bound instead of O(s log p) for the sub-Gaussian case.
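The tail condition P(|X_ij| > t) ≤ c₁ exp(−t/c₂) is easy to check empirically. The sketch below (our own check) contrasts Gaussian, Laplace (symmetric exponential), and Gumbel (extreme-value) tails:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10**6
g = rng.normal(size=n)     # sub-Gaussian tails
e = rng.laplace(size=n)    # symmetric exponential: P(|e| > t) = exp(-t)
v = rng.gumbel(size=n)     # an extreme-value distribution (sub-exponential)

for t in (2.0, 4.0, 6.0):
    tail_g = (np.abs(g) > t).mean()
    tail_e = (np.abs(e) > t).mean()
    tail_v = (np.abs(v) > t).mean()
    # tail_e tracks exp(-t), and both heavy-tailed samples dominate the Gaussian

p_e4 = (np.abs(e) > 4.0).mean()
p_g4 = (np.abs(g) > 4.0).mean()
```

At t = 4 the Laplace tail is near exp(−4) ≈ 0.018, while the Gaussian tail is already below 10⁻⁴: exactly the gap that breaks sub-Gaussian concentration arguments.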
The analysis depends on two key ingredients which have been discussed in previous work [16, 2]: (1) the satisfaction of the RE condition on a set A, the error set associated with the norm, and (2) the design matrix–noise interaction, manifested in the form of lower bounds on the regularization parameter. Specifically, the RE condition depends on the properties of the design matrix. We outline two different approaches for obtaining the sample complexity needed to satisfy the RE condition: one based on the 'exponential width' of A and another based on the VC-dimension of linear predictors drawn from A [10, 20, 11]. For two widely used cases, Lasso and group Lasso, we show that the VC-dimension based analysis leads to a sharp bound on the sample complexity, which is exactly the same order as that for sub-Gaussian design matrices! In particular, for Lasso with s-sparsity, O(s log p) samples are sufficient to satisfy the RE condition for sub-exponential designs. Further, we show that the bound on the regularization parameter depends on the 'exponential width' w_e(Ω_R) of the unit norm ball Ω_R = {u ∈ R^p | R(u) ≤ 1}. Through a careful argument based on generic chaining [19], we show that for any set T ⊂ R^p, the exponential width satisfies w_e(T) ≤ c·w_g(T)·√(log p), where w_g(T) is the Gaussian width of the set T and c is an absolute constant. Recent advances on computing or bounding w_g(T) for various structured sets can then be used to bound w_e(T). Again, for the case of Lasso, w_e(Ω_R) ≤ c log p. The rest of the paper is organized as follows. In Section 2 we describe various aspects of the problem and highlight our contributions. In Section 3 we establish a key result on the relationship between the Gaussian and exponential widths of sets, which will be used in our subsequent analysis. In Section 4 we establish results on the regularization parameter λ_n, the RE constant κ, and the non-asymptotic estimation error ‖Δ̂_n‖₂. In Section 5 we show some experimental results before concluding in Section 6.
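Both widths of the ℓ1 unit ball can be estimated by Monte Carlo, since sup_{t∈B₁}⟨h, t⟩ = ‖h‖_∞. The sketch below (our own check, with arbitrary sizes) shows the exponential width growing like log p while the Gaussian width grows like √(log p), consistent with the w_e(Ω_R) ≤ c log p claim:

```python
import numpy as np

def l1_ball_width(p, sampler, reps=1000):
    # For the unit l1 ball, sup_{t in B1} <h, t> = ||h||_inf,
    # so the width is simply E ||h||_inf.
    h = sampler((reps, p))
    return np.abs(h).max(axis=1).mean()

rng = np.random.default_rng(0)
gauss = lambda shape: rng.normal(size=shape)
expo = lambda shape: rng.laplace(size=shape)   # P(|e_i| > u) = exp(-u)

widths = {}
for p in (100, 1000, 10000):
    # Gaussian width ~ sqrt(2 log p); exponential width ~ log p
    widths[p] = (l1_ball_width(p, gauss), l1_ball_width(p, expo))
```

The ratio w_e/w_g visibly grows with p, roughly like √(log p), matching the generic chaining bound of Section 3.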
2 Background and Preliminaries

In this section, we describe various aspects of the problem, introducing notation along the way, and highlight our contributions. Throughout the paper the values of constants may change from line to line.

2.1 Problem setup

We consider the problem defined in (2). The goal of this paper is to establish conditions for consistent estimation and derive bounds on ‖Δ̂_n‖₂ = ‖θ̂ − θ*‖₂.

Error set: Under the assumption λ_n ≥ β R*((1/n) Xᵀ(y − Xθ*)), β > 1, the error vector Δ̂_n = θ̂ − θ* lies in a cone A ⊆ S^{p−1} [3, 16, 2].

Regularization parameter: For β > 1, λ_n ≥ β R*((1/n) Xᵀ(y − Xθ*)), following the analysis in [16, 2].

Restricted Eigenvalue (RE) condition: For consistent estimation, the design matrix X should satisfy the RE condition inf_{u∈A} (1/√n) ‖Xu‖₂ ≥ κ on the error set A for some constant κ > 0 [3, 16, 2, 20, 18]. The RE sample complexity is the number of samples n required to satisfy the RE condition, and has been shown to be related to the Gaussian width of the error set [7, 2, 20].

Deterministic recovery bounds: If X satisfies the RE condition on the error set A and λ_n satisfies the assumptions stated earlier, [2] show the error bound ‖Δ̂_n‖₂ ≤ c Ψ(A) λ_n/κ with high probability (w.h.p.) for some constant c, where Ψ(A) = sup_{u∈A} R(u)/‖u‖₂ is the norm compatibility constant.

ℓ1 norm regularization: One example of R(·) we will consider throughout the paper is ℓ1 norm regularization. In particular, we will always consider ‖θ*‖₀ = s.

Group-sparse norms: Another popular example we consider is the group-sparse norm. Let G = {G₁, G₂, ..., G_{N_G}} denote a collection of groups, which are blocks of any vector θ ∈ R^p. For any vector θ ∈ R^p and group index i, let θ_{G_i} denote the vector that agrees with θ on the coordinates in G_i and is zero elsewhere. Let m = max_{i∈{1,...,N_G}} |G_i| be the maximum size of any group. In the group-sparse setting, for a subset S_G ⊆ {1, 2, ..., N_G} with cardinality |S_G| = s_G, we assume that the parameter vector θ* ∈ R^p satisfies θ*_{G_i} = 0 for all i ∉ S_G.
Such a vector is called S_G-group-sparse. We will focus on the case R(θ) = Σ_{i=1}^{N_G} ‖θ_{G_i}‖₂.

2.2 Contributions

One of our major results is the relationship between the Gaussian and exponential widths of sets, obtained using arguments from generic chaining [19]. Existing analysis frameworks for our problem for sub-Gaussian X and ω obtain results in terms of the Gaussian widths of suitable sets associated with the norm [2, 20]. For sub-exponential X and ω this dependency, in some cases, is replaced by the exponential width of the set. By establishing a precise relationship between the two quantities, we leverage existing results on the computation of Gaussian widths for our scenario. Another contribution is obtaining the same order of RE sample complexity bound as in the sub-Gaussian case for the ℓ1 and group-sparse norms. While this strong result has already been explored in [11] for ℓ1, we adapt it to our analysis framework and also extend it to the group-sparse setting. As for applications of our work, the results apply to all log-concave distributions, which by definition are distributions admitting a log-concave density f, i.e. a density of the form f = e^Ψ with Ψ a concave function. This covers many practically used distributions, including extreme-value distributions.

3 Relationship between Gaussian and Exponential Widths

In this section we introduce a complexity parameter of a set, w_e(·), which we call the exponential width of the set, and establish a sharp upper bound for it in terms of the Gaussian width of the set, w_g(·). In particular, we prove the inequality w_e(A) ≤ c · w_g(A) √(log p) for some fixed constant c. To see the connection with the rest of the paper, recall that our subsequent results for λ_n and κ are expressed in terms of the Gaussian width and exponential width of specific sets associated with the norm.
With this result, we establish precise sample complexity bounds by leveraging a body of literature on the computation of Gaussian widths for various structured sets [7, 20]. We note that while the exponential width has been defined and used earlier, see e.g. [19, 15], to the best of our knowledge this is the first result establishing the relation between the Gaussian and exponential widths of sets. Our result relies on generic chaining [19].

3.1 Generic Chaining, Gaussian Width and Exponential Widths

Consider a process {X_t}_{t∈T} with X_t = ⟨h, t⟩, indexed by a set T ⊆ R^p, where each element h_i has mean 0. It follows from the definition that the process is centered, i.e., E(X_t) = 0 for all t ∈ T. We will also assume for convenience, w.l.o.g., that the set T is finite. For any s, t ∈ T, consider a canonical distance metric d(s, t). We are interested in computing the quantity E sup_{t∈T} X_t. Now, for reasons detailed in the supplement, consider splitting T into a sequence of subsets T₀ ⊆ T₁ ⊆ ... ⊆ T, with T₀ = {t₀}, |T_n| ≤ 2^{2^n} for n ≥ 1, and T_m = T for some large m. The function π_n : T → T_n, defined as π_n(t) = argmin_{s∈T_n} d(s, t), maps each point t ∈ T to a closest point of T_n according to d. The set T_n and the associated function π_n define a partition A_n of the set T: each element of A_n consists of some s ∈ T_n together with all t ∈ T mapped to it by π_n. The size of the partition satisfies |A_n| ≤ 2^{2^n}. The sequences (A_n) are called admissible sequences in generic chaining. Note that there are multiple admissible sequences, corresponding to the multiple ways of defining the sets T₀, T₁, ..., T_m. We denote by Δ(A_n(t)) the diameter of the element A_n(t) w.r.t. the distance metric d, defined as Δ(A_n(t)) = sup_{s,s'∈A_n(t)} d(s, s').

Definition 1 (γ-functionals [19]) Given α > 0 and a metric space (T, d), we define

    γ_α(T, d) = inf sup_{t∈T} Σ_{n≥0} 2^{n/α} Δ(A_n(t)) ,    (3)

where the inf is taken over all possible admissible sequences of the set T.
Gaussian width: Let X_t = ⟨g, t⟩ for t ∈ T, where the elements g_i are i.i.d. N(0, 1). The quantity w_g(T) = E sup_{t∈T} X_t is called the Gaussian width of the set T. Define the distance metric d₂(s, t) = ‖s − t‖₂. The relation between the Gaussian width and the γ-functionals is seen from the following result, [Theorem 2.1.1] of [19]:

    (1/L) γ₂(T, d₂) ≤ w_g(T) ≤ L γ₂(T, d₂) .    (4)

Note that, following [Theorem 2.1.5] in [19], any process which satisfies the concentration bound P(|X_s − X_t| ≥ u) ≤ 2 exp(−u²/d₂(s, t)²) satisfies the upper bound in (4).

Exponential width: Let X_t = ⟨e, t⟩ for t ∈ T, where each element e_i is a centered i.i.d. exponential random variable satisfying P(|e_i| ≥ u) = exp(−u). Define the distance metrics d₂(s, t) = ‖s − t‖₂ and d_∞(s, t) = ‖s − t‖_∞. The quantity w_e(T) = E sup_{t∈T} X_t is called the exponential width of the set T. By [Theorem 1.2.7] and [Theorem 5.2.7] in [19], for some universal constant L, w_e(T) satisfies:

    (1/L) (γ₂(T, d₂) + γ₁(T, d_∞)) ≤ w_e(T) ≤ L (γ₂(T, d₂) + γ₁(T, d_∞)) .    (5)

Note that any process which satisfies the sub-exponential concentration bound P(|X_s − X_t| ≥ u) ≤ 2 exp(−K min(u²/d₂(s, t)², u/d_∞(s, t))) satisfies the upper bound in the above inequality [15, 19].

3.2 An Upper Bound for the Exponential Width

In this section we prove the following relationship between the exponential and Gaussian widths:

Theorem 1 For any set T ⊂ R^p and some constant c, the following holds:

    w_e(T) ≤ c · w_g(T) √(log p) .    (6)

Proof: The result depends on the geometric results [Lemma 2.6.1] and [Theorem 2.6.2] in [19].

Theorem 2 [19] Consider a countable set T ⊂ R^p, and a number u > 0. Assume that the Gaussian width is bounded, i.e. S = γ₂(T, d₂) < ∞. Then there is a decomposition T ⊂ T₁ + T₂, where T₁ + T₂ = {t₁ + t₂ : t₁ ∈ T₁, t₂ ∈ T₂}, such that

    γ₂(T₁, d₂) ≤ LS ,   γ₁(T₁, d_∞) ≤ LSu ,    (7)
    γ₂(T₂, d₂) ≤ LS ,   T₂ ⊂ (LS/u) B₁ ,    (8)

where L is some universal constant and B₁ is the unit ℓ1 norm ball in R^p.

We first examine the exponential widths of the sets T₁ and T₂.
For the set T₁:

    w_e(T₁) ≤ L [γ₂(T₁, d₂) + γ₁(T₁, d_∞)] ≤ L [S + Su] = L (w_g(T) + w_g(T) u) ,    (9)

where the first inequality follows from (5) and the second from (7). To compute the exponential width of T₂, we need the following result bounding the exponential width of the unit ℓ1-norm ball in p dimensions. The proof, given in the supplement, is based on the fact that sup_{t∈B₁} ⟨e, t⟩ = ‖e‖_∞, together with a simple union bound argument to bound ‖e‖_∞.

Lemma 1 Consider the set B₁ = {t ∈ R^p : ‖t‖₁ ≤ 1}. Then for some universal constant L:

    w_e(B₁) = E sup_{t∈B₁} ⟨e, t⟩ ≤ L log p .    (10)

The exponential width of T₂ is:

    w_e(T₂) = w_e((LS/u) B₁) = (LS/u) w_e(B₁) = (L/u) w_g(T) w_e(B₁) ≤ (L/u) w_g(T) log p .    (11)

The first equality follows from (8), as T₂ is a subset of an (LS/u)-scaled ℓ1 norm ball; the second follows from elementary properties of widths of sets; and the last inequality follows from Lemma 1. Now, as stated in Theorem 2, u in (9) and (11) can be any number greater than 0. Choosing u = √(log p) and noting that (1 + √(log p)) ≤ L √(log p) for some constant L yields:

    w_e(T₁) ≤ L w_g(T) √(log p) ,   w_e(T₂) ≤ L w_g(T) √(log p) .    (12)

The final step, following arguments as in [Theorem 2.1.6] of [19], is to bound the exponential width of the set T:

    w_e(T) = E[sup_{t∈T} ⟨h, t⟩] ≤ E[sup_{t₁∈T₁} ⟨h, t₁⟩] + E[sup_{t₂∈T₂} ⟨h, t₂⟩] ≤ w_e(T₁) + w_e(T₂) ≤ L w_g(T) √(log p) .

This proves Theorem 1.

4 Recovery Bounds

We obtain bounds on the error vector Δ̂_n = θ̂ − θ*. If the regularization parameter satisfies λ_n ≥ β R*((1/n) Xᵀ(y − Xθ*)), β > 1, and the RE condition is satisfied on the error set A with RE constant κ, then [2, 16] obtain the following error bound w.h.p. for some constant c:

    ‖Δ̂_n‖₂ ≤ c · (λ_n/κ) Ψ(A) ,    (13)

where Ψ(A) is the norm compatibility constant, given by sup_{u∈A} R(u)/‖u‖₂.

4.1 Regularization Parameter

As discussed earlier, for our analysis the regularization parameter should satisfy λ_n ≥ β R*((1/n) Xᵀ(y − Xθ*)), β > 1. Observe that for the linear model (1), ω = y − Xθ* is the noise, implying that λ_n ≥ β R*((1/n) Xᵀω).
With e denoting a sub-exponential random vector with i.i.d. entries,

    E[R*((1/n) Xᵀω)] = E[sup_{u∈Ω_R} ⟨(1/n)‖ω‖₂ · Xᵀω/‖ω‖₂, u⟩] = (1/n) E[‖ω‖₂] · E[sup_{u∈Ω_R} ⟨e, u⟩] .    (14)

The first equality follows from the definition of the dual norm. The second follows from the fact that X and ω are independent of each other. Also, by elementary arguments [21], e = Xᵀ(ω/‖ω‖₂) has i.i.d. sub-exponential entries with sub-exponential norm bounded by sup_{ω∈R^n} ‖⟨X_i, ω/‖ω‖₂⟩‖_{ψ₁}. The above argument was first proposed for the sub-Gaussian case in [2]. For sub-exponential design and noise, the difference compared to the sub-Gaussian case is the dependence on the exponential width instead of the Gaussian width of the unit norm ball. Using known results on the Gaussian widths of the unit ℓ1 and group-sparse norm balls, the corollaries below are derived using the relationship between Gaussian and exponential widths established in Section 3:

Corollary 1 If R(·) is the ℓ1 norm, then for sub-exponential design matrix X and noise ω,

    E[R*((1/n) Xᵀ(y − Xθ*))] ≤ η₀ (log p)/√n .    (15)

Corollary 2 If R(·) is the group-sparse norm, then for sub-exponential design matrix X and noise ω,

    E[R*((1/n) Xᵀ(y − Xθ*))] ≤ η₀ √((m + log N_G) log p) / √n .    (16)

4.2 The RE condition

For Gaussian and sub-Gaussian X, previous work has established RIP bounds of the form

    κ₁ ≤ inf_{u∈A} (1/√n)‖Xu‖₂ ≤ sup_{u∈A} (1/√n)‖Xu‖₂ ≤ κ₂ .

In particular, RIP is satisfied w.h.p. if the number of samples is of the order of the square of the Gaussian width of the error set, i.e., O(w_g²(A)), which we will call the sub-Gaussian RE sample complexity bound. As we move to heavier tails, establishing such two-sided bounds requires assumptions on the boundedness of the Euclidean norm of the rows of X [15, 17, 10]. On the other hand, analysis of only the lower bound requires very few assumptions on X. In particular, ‖Xu‖₂², being a sum of non-negative random quantities, should admit a lower bound even under very weak moment assumptions on X.
Making these observations, [10, 17] develop arguments obtaining sub-Gaussian RE sample complexity bounds when the set A is the unit sphere S^{p−1}, even for design matrices having only bounded fourth moments. Note that with such weak moment assumptions, a non-trivial non-asymptotic upper bound cannot be established. Our analysis for the RE condition essentially follows this premise and arguments from [10].

4.2.1 A Bound Based on the Exponential Width

We obtain a sample complexity bound which depends on the exponential width of the error set A. The result stated below follows along similar arguments to those in [20], which in turn are based on arguments from [10, 14].

Theorem 3 Let X ∈ R^{n×p} have independent isotropic sub-exponential rows. Let A ⊆ S^{p−1}, 0 < ξ < 1, and let c be a constant that depends on the sub-exponential norm K = sup_{u∈A} ‖⟨X, u⟩‖_{ψ₁}. Let w_e(A) denote the exponential width of the set. Then for any τ > 0, with probability at least 1 − exp(−τ²/2),

    inf_{u∈A} ‖Xu‖₂ ≥ c ξ (1 − ξ²)² √n − 4 w_e(A) − ξτ .    (17)

Contrasting the result (17) with previous results for the sub-Gaussian case [2, 20], the dependence on w_g(A) on the r.h.s. is replaced by w_e(A), leading to a sample complexity bound that is a factor of log p worse. The corollary below applies the result to the ℓ1 norm. Note that results from [1] for the ℓ1 norm show RIP bounds w.h.p. for the same number of samples.

Corollary 3 For an s-sparse θ* and ℓ1 norm regularization, if n ≥ c · s log² p, then with probability at least 1 − exp(−τ²/2), and with constants c, κ depending on ξ and τ,

    inf_{u∈A} ‖Xu‖₂ ≥ κ .    (18)

4.2.2 A Bound Based on VC-Dimensions

In this section, we show a stronger, sub-Gaussian-order RE sample complexity result for sub-exponential X with ℓ1 or group-sparse regularization. The arguments follow along similar lines to [11, 10].

Theorem 4 Let X ∈ R^{n×p} be a random matrix with isotropic random sub-exponential rows X_i ∈ R^p. Let A ⊆ S^{p−1}, 0 < ξ < 1, let c be a constant that depends on the sub-exponential norm K = sup_{u∈A} ‖⟨X, u⟩‖_{ψ₁}, and define β = c(1 − ξ²)².
Let w_e(A) denote the exponential width of the set A. Let C_ξ = {I[|⟨X_i, u⟩| > ξ] : u ∈ A} be a VC-class with VC-dimension VC(C_ξ) ≤ d. For a suitable constant c₁, if n ≥ c₁(d/β²), then with probability at least 1 − exp(−η₀β²n):

    inf_{u∈A} (1/√n) ‖Xu‖₂ ≥ c ξ (1 − ξ²)² / 2 .    (19)

Consider the case of the ℓ1 norm. A consequence of the above result is that the RE condition is satisfied on the set B = {u : ‖u‖₀ = s₁} ∩ S^{p−1} for some s₁ ≥ c · s, where c is a constant that will depend on the RE constant κ, when n is O(s₁ log p). The argument follows from the fact that B ∩ S^{p−1} is a union of (p choose s₁) spheres; the result is obtained by applying Theorem 4 to each sphere and using a union bound argument. The final step involves showing that the RE condition is satisfied on the error set A if it is satisfied on B, using Maurey's empirical approximation argument [17, 18, 11].

Corollary 4 For the set A ⊆ S^{p−1} which is the error set for the ℓ1 norm, if n ≥ c₂ s log(ep/s)/β² for some suitable constant c₂, then with probability at least 1 − exp(−η₀nβ²) − 1/(w^{η₁} p^{η₁−1}), where η₀, η₁, w > 1 are constants, the following holds for κ depending on the constant ξ:

    inf_{u∈A} (1/√n) ‖Xu‖₂ ≥ κ .    (20)

Essentially the same arguments for the group-sparse norm lead to the following result:

Corollary 5 For the set A ⊆ S^{p−1} which is the error set for the group-sparse norm, if n ≥ c(m s_G + s_G log(eN_G/s_G))/β², then with probability at least 1 − exp(−η₀nβ²) − 1/(w^{η₁} N_G^{η₁−1} m^{η₁−1}), where η₀, η₁, w > 1 are constants, and for κ depending on the constant ξ,

    inf_{u∈A} (1/√n) ‖Xu‖₂ ≥ κ .    (21)

4.3 Recovery Bounds for ℓ1 and Group-Sparse Norms

We combine the bound (13) with the results obtained above for λ_n and κ for the ℓ1 and group-sparse norms.

Corollary 6 For the ℓ1 norm, when n ≥ c s log p for some constant c, with high probability:

    ‖Δ̂_n‖₂ ≤ O(√s log p / √n) .    (22)

Corollary 7 For the group-sparse norm, when n ≥ c(m s_G + s_G log N_G) for some constant c, with high probability:

    ‖Δ̂_n‖₂ ≤ O(√(s_G (m + log N_G) log p / n))
(23) Both bounds are √log p worse compared to corresponding bounds for the sub-Gaussian case. In terms of sample complexity, n should scale as O(s log2 p), instead of O(s log p) for sub-Gaussian, for ℓ1 norm and O(sG log p(m + log NG)), instead of O(sG(m + log NG)) for the sub-Gaussian case, for group-sparse lasso to get upto a constant order error bound. 5 Experiments We perform experiments on synthetic data to compare estimation errors for Gaussian and subexponential design matrices and noise for both ℓ1 and group sparse norms. For ℓ1 we run experiments with dimensionality p = 300 and sparsity level s = 10. For group sparse norms we run experiments with dimensionality p = 300, max. group size m = 6, number of groups NG = 50 groups each of size 6 and 4 non-zero groups. For the design matrix X, for the Gaussian case we sample rows randomly from an isotropic Gaussian distribution, while for sub-exponential design 7 0 20 40 60 80 100 120 140 160 180 200 0 0.2 0.4 0.6 0.8 1 Number of samples Probability of success Basis pursuit with Gaussian design Basis pursuit with sub−exponential design Group sparse with Gaussian design Group sparse with sub−exponential design Figure 1: Probability of recovery in noiseless case with increasing sample size. There is a sharp phase transition and the curves overlap for Gaussian and subexponential designs. 60 80 100 120 140 160 180 200 0.55 0.6 0.65 0.7 0.75 0.8 0.85 0.9 0.95 1 Estimation error Lasso with Gaussian design and noise Lasso with sub−exponential design and noise 120 130 140 150 160 170 180 0.65 0.7 0.75 0.8 0.85 0.9 Number of samples Estimation error Group sparse lasso with Gaussian design and noise Group sparse lasso with sub−exponential design and noise Figure 2: Estimation error ∥ˆ∆n∥2 vs sample size for ℓ1 (left) and group-sparse norms (right). The curve for sub-exponential designs and noise decays slower than Gaussians. matrices we sample each row of X randomly from an isotropic extreme-value distribution. 
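As an illustrative sketch of this setup (not the authors' code: a Laplace design stands in for the extreme-value distribution, a basic proximal-gradient (ISTA) solver stands in for the lasso solver, and the dimensions are reduced for speed):

```python
import numpy as np

def ista_lasso(X, y, lam, n_iter=500):
    """Proximal gradient (ISTA) for (1/2n)||y - X theta||^2 + lam * ||theta||_1."""
    n, p = X.shape
    L = np.linalg.norm(X, 2) ** 2 / n          # Lipschitz constant of the gradient
    theta = np.zeros(p)
    for _ in range(n_iter):
        grad = X.T @ (X @ theta - y) / n
        z = theta - grad / L
        theta = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-thresholding
    return theta

rng = np.random.default_rng(0)
n, p, s = 200, 50, 5
theta_star = np.zeros(p)
theta_star[:s] = 1.0

# Sub-exponential design: i.i.d. Laplace entries, rescaled to unit variance.
X = rng.laplace(size=(n, p)) / np.sqrt(2.0)
y = X @ theta_star + 0.1 * rng.standard_normal(n)   # noisy observations

lam = 0.1 * np.sqrt(2 * np.log(p) / n)              # standard lasso-scale penalty
theta_hat = ista_lasso(X, y, lam)
err = np.linalg.norm(theta_hat - theta_star)        # estimation error ||Delta||_2
```

Swapping `rng.laplace` for `rng.standard_normal` gives the Gaussian design for comparison; averaging `err` over repeated draws reproduces the style of curves reported in Figure 2.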
The number of samples n is incremented in steps of 10, starting from an initial value of 5. The noise ω is sampled i.i.d. from a Gaussian distribution in the Gaussian case and from an extreme-value distribution in the sub-exponential case, with variance 1 in both cases. For each sample size n, we repeat the above procedure 100 times, and all results reported in the plots are averages over the 100 runs. We report two sets of results. Figure 1 shows the percentage of successes vs. sample size for the noiseless case, when $y = X\theta^*$. A success in the noiseless case denotes exact recovery, which is possible when the RE condition is satisfied. Hence we expect the sample complexity for recovery to be of the order of the square of the Gaussian width for both the Gaussian and extreme-value distributions, as validated by the plots in Figure 1. Figure 2 shows the average estimation error vs. the number of samples for the noisy case, when $y = X\theta^* + \omega$. The noise is added only for runs in which exact recovery was possible in the noiseless case; for example, when n = 5 we do not have any results in Figure 2, as even noiseless recovery is not possible. For each n, the estimation errors are averages over 100 runs. As seen in Figure 2, the error decays more slowly for extreme-value distributions than in the Gaussian case.

6 Conclusions

This paper presents a unified framework for the analysis of non-asymptotic error and structured recovery in norm-regularized regression problems when the design matrix and noise are sub-exponential, essentially generalizing the corresponding analysis and results for the sub-Gaussian case. The main observation is that the dependence on the Gaussian width is replaced by the exponential width of suitable sets associated with the norm. Together with the result on the relationship between exponential and Gaussian widths, previous analysis techniques essentially carry over to the sub-exponential case. We also show that a stronger result holds for the RE condition for the lasso and group-lasso problems.
As future work, we will consider extending the stronger result for the RE condition to all norms.

Acknowledgements: This work was supported by NSF grants IIS-1447566, IIS-1447574, IIS-1422557, CCF-1451986, CNS-1314560, IIS-0953274, IIS-1029711, and by NASA grant NNX12AQ39A.

References
[1] R. Adamczak, A. E. Litvak, A. Pajor, and N. Tomczak-Jaegermann. Restricted isometry property of matrices with independent columns and neighborly polytopes by random sampling. Constructive Approximation, 34(1):61–88, 2011.
[2] A. Banerjee, S. Chen, F. Fazayeli, and V. Sivakumar. Estimation with Norm Regularization. In NIPS, 2014.
[3] P. J. Bickel, Y. Ritov, and A. B. Tsybakov. Simultaneous analysis of Lasso and Dantzig selector. Annals of Statistics, 37(4):1705–1732, 2009.
[4] E. J. Candes, J. Romberg, and T. Tao. Robust Uncertainty Principles: Exact Signal Reconstruction from Highly Incomplete Frequency Information. IEEE Transactions on Information Theory, 52(2):489–509, 2006.
[5] E. J. Candes and T. Tao. Decoding by Linear Programming. IEEE Transactions on Information Theory, 51(12):4203–4215, 2005.
[6] E. J. Candes and T. Tao. The Dantzig selector: statistical estimation when p is much larger than n. Annals of Statistics, 35(6):2313–2351, 2007.
[7] V. Chandrasekaran, B. Recht, P. A. Parrilo, and A. S. Willsky. The Convex Geometry of Linear Inverse Problems. Foundations of Computational Mathematics, 12(6):805–849, 2012.
[8] S. Chatterjee, S. Chen, and A. Banerjee. Generalized Dantzig Selector: Application to the k-support norm. In NIPS, 2014.
[9] D. Hsu and S. Sabato. Heavy-tailed regression with a generalized median-of-means. In ICML, 2014.
[10] V. Koltchinskii and S. Mendelson. Bounding the smallest singular value of a random matrix without concentration. arXiv:1312.3580, 2013.
[11] G. Lecué and S. Mendelson. Sparse recovery under weak moment assumptions. arXiv:1401.2188, 2014.
[12] M. Ledoux and M. Talagrand. Probability in Banach Spaces: Isoperimetry and Processes.
Springer, Berlin, 1991.
[13] N. Meinshausen and B. Yu. Lasso-type recovery of sparse representations for high-dimensional data. Annals of Statistics, 37(1):246–270, 2009.
[14] S. Mendelson. Learning without concentration. Journal of the ACM, to appear, 2015.
[15] S. Mendelson and G. Paouris. On generic chaining and the smallest singular value of random matrices with heavy tails. Journal of Functional Analysis, 262(9):3775–3811, 2012.
[16] S. N. Negahban, P. Ravikumar, M. J. Wainwright, and B. Yu. A Unified Framework for High-Dimensional Analysis of M-Estimators with Decomposable Regularizers. Statistical Science, 27(4):538–557, 2012.
[17] R. I. Oliveira. The lower tail of random quadratic forms, with applications to ordinary least squares and restricted eigenvalue properties. arXiv:1312.2903, 2013.
[18] M. Rudelson and S. Zhou. Reconstruction from anisotropic random measurements. IEEE Transactions on Information Theory, 59(6):3434–3447, 2013.
[19] M. Talagrand. The Generic Chaining. Springer, Berlin, 2005.
[20] J. A. Tropp. Convex recovery of a structured signal from independent random linear measurements. In Sampling Theory: A Renaissance. To appear, 2015.
[21] R. Vershynin. Introduction to the non-asymptotic analysis of random matrices. In Y. Eldar and G. Kutyniok, editors, Compressed Sensing, pages 210–268. Cambridge University Press, Cambridge, 2012.
[22] M. J. Wainwright. Sharp thresholds for high-dimensional and noisy sparsity recovery using ℓ1-constrained quadratic programming (Lasso). IEEE Transactions on Information Theory, 55(5):2183–2201, 2009.
[23] P. Zhao and B. Yu. On Model Selection Consistency of Lasso. Journal of Machine Learning Research, 7:2541–2563, 2006.
Fast Randomized Kernel Ridge Regression with Statistical Guarantees∗

Ahmed El Alaoui† and Michael W. Mahoney‡
†Electrical Engineering and Computer Sciences, ‡Statistics and International Computer Science Institute
University of California, Berkeley, Berkeley, CA 94720.
{elalaoui@eecs,mmahoney@stat}.berkeley.edu

Abstract

One approach to improving the running time of kernel-based methods is to build a small sketch of the kernel matrix and use it in lieu of the full matrix in the machine learning task of interest. Here, we describe a version of this approach that comes with running time guarantees as well as improved guarantees on its statistical performance. By extending the notion of statistical leverage scores to the setting of kernel ridge regression, we are able to identify a sampling distribution that reduces the size of the sketch (i.e., the required number of columns to be sampled) to the effective dimensionality of the problem. This latter quantity is often much smaller than previous bounds that depend on the maximal degrees of freedom. We give empirical evidence supporting this fact. Our second contribution is a fast algorithm that quickly computes coarse approximations to these scores in time linear in the number of samples. More precisely, the running time of the algorithm is $O(np^2)$, with p depending only on the trace of the kernel matrix and the regularization parameter. This is obtained via a variant of squared-length sampling that we adapt to the kernel setting. Lastly, we discuss how this new notion of the leverage of a data point captures a fine notion of the difficulty of the learning problem.

1 Introduction

We consider the low-rank approximation of symmetric positive semi-definite (SPSD) matrices that arise in machine learning and data analysis, with an emphasis on obtaining good statistical guarantees. This is of interest primarily in connection with kernel-based machine learning methods.
Recent work in this area has focused on one or the other of two very different perspectives: an algorithmic perspective, where the focus is on running time issues and worst-case quality-of-approximation guarantees, given a fixed input matrix; and a statistical perspective, where the goal is to obtain good inferential properties, under some hypothesized model, by using the low-rank approximation in place of the full kernel matrix. The recent results of Gittens and Mahoney [2] provide the strongest example of the former, and the recent results of Bach [3] are an excellent example of the latter. In this paper, we combine ideas from these two lines of work in order to obtain a fast randomized kernel method with statistical guarantees that are improved relative to the state of the art. To understand our approach, recall that several papers have established the crucial importance, from the algorithmic perspective, of the statistical leverage scores, as they capture structural non-uniformities of the input matrix and can be used to obtain very sharp worst-case approximation guarantees. See, e.g., work on CUR matrix decompositions [5, 6], work on the fast approximation of the statistical leverage scores [7], and the recent review [8] for more details.

∗A technical report version of this conference paper is available at [1].

Here, we simply note that, when restricted to an n × n SPSD matrix K and a rank parameter k, the statistical leverage scores relative to the best rank-k approximation to K, call them $\ell_i$ for $i \in \{1, \ldots, n\}$, are the diagonal elements of the projection matrix onto the best rank-k approximation of K. That is, $\ell_i = \mathrm{diag}(K_k K_k^{\dagger})_i$, where $K_k$ is the best rank-k approximation of K and $K_k^{\dagger}$ is the Moore-Penrose inverse of $K_k$.
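As a quick numeric illustration (synthetic K, not from the paper), the rank-k leverage scores just defined reduce to the squared row norms of the top-k eigenvector matrix, since $K_k K_k^{\dagger}$ is the orthogonal projector onto the top-k eigenspace:

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 40, 5

# Synthetic SPSD matrix (illustrative stand-in for a kernel matrix).
Z = rng.standard_normal((n, n))
K = Z @ Z.T

# Best rank-k approximation K_k uses the top-k eigenpairs of K.
vals, vecs = np.linalg.eigh(K)   # eigenvalues in ascending order
Uk = vecs[:, -k:]                # top-k eigenvectors

# K_k K_k^+ is the projector Uk Uk^T, so the rank-k leverage scores
# are the diagonal of Uk Uk^T, i.e. the squared row norms of Uk.
scores = np.sum(Uk ** 2, axis=1)
```

Since the scores are the diagonal of a rank-k orthogonal projector, they lie in [0, 1] and sum to exactly k.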
The recent work by Gittens and Mahoney [2] showed that qualitatively improved worst-case bounds for the low-rank approximation of SPSD matrices could be obtained in one of two related ways: either compute (with the fast algorithm of [7]) approximations to the leverage scores, and use those approximations as an importance sampling distribution in a random sampling algorithm; or rotate (with a Gaussian-based or Hadamard-based random projection) to a random basis where those scores are uniformized, and sample randomly in that rotated basis. In this paper, we extend these ideas, and we show that, from the statistical perspective, we are able to obtain a low-rank approximation that comes with improved statistical guarantees by using a variant of this more traditional notion of statistical leverage. In particular, we improve the recent bounds of Bach [3], which provide the first known statistical convergence result for substituting the kernel matrix by its low-rank approximation. To understand the connection, recall that a key component of Bach's approach is the quantity $d_{\mathrm{mof}} = n \, \| \mathrm{diag}(K(K + n\lambda I)^{-1}) \|_\infty$, which he calls the maximal marginal degrees of freedom.¹ Bach's main result is that by constructing a low-rank approximation of the original kernel matrix from $p = O(d_{\mathrm{mof}}/\epsilon)$ columns sampled uniformly at random, i.e., performing the vanilla Nyström method, and then using this low-rank approximation in a prediction task, the statistical performance is within a factor of $1 + \epsilon$ of the performance when the entire kernel matrix is used. Here, we show that this uniform sampling is suboptimal. We do so by sampling with respect to a coarse but quickly-computable approximation of a variant of the statistical leverage scores, given in Definition 1 below, and we show that we can obtain similar $1 + \epsilon$ guarantees by sampling only $O(d_{\mathrm{eff}}/\epsilon)$ columns, where $d_{\mathrm{eff}} = \mathrm{Tr}(K(K + n\lambda I)^{-1}) < d_{\mathrm{mof}}$.
The quantity $d_{\mathrm{eff}}$ is called the effective dimensionality of the learning problem, and it can be interpreted as the implicit number of parameters in this nonparametric setting [9, 10]. We expect that our results and insights will be useful much more generally. As an example of this, we can directly compare the Nyström sampling method to a related divide-and-conquer approach, thereby answering an open problem of Zhang et al. [9]. Recall that the Zhang et al. divide-and-conquer method consists of dividing the dataset $\{(x_i, y_i)\}_{i=1}^n$ into m random partitions of equal size, computing estimators on each partition in parallel, and then averaging the estimators. They prove the minimax optimality of their estimator, although their multiplicative constants are suboptimal; and, in terms of the number of kernel evaluations, their method requires $m \times (n/m)^2$, with m of the order $n/d_{\mathrm{eff}}^2$, which gives a total of $O(n d_{\mathrm{eff}}^2)$ evaluations. They noticed that the scaling of their estimator was not directly comparable to that of the Nyström sampling method (which was proven to require only $O(n d_{\mathrm{mof}})$ evaluations if the sampling is uniform [3]), and they left it as an open problem to determine which, if either, method is fundamentally better than the other. Using our Theorem 3, we are able to put both results on common ground for comparison. Indeed, the estimator obtained by our non-uniform Nyström sampling requires only $O(n d_{\mathrm{eff}})$ kernel evaluations (compared to $O(n d_{\mathrm{eff}}^2)$ and $O(n d_{\mathrm{mof}})$), and it attains the same bound on the statistical predictive performance as in [3]. In this sense, our result combines "the best of both worlds," having the reduced sample complexity of [9] and the sharp approximation bound of [3].

2 Preliminaries and notation

Let $\{(x_i, y_i)\}_{i=1}^n$ be n pairs of points in $\mathcal{X} \times \mathcal{Y}$, where $\mathcal{X}$ is the input space and $\mathcal{Y}$ is the response space.
The kernel-based learning problem can be cast as the following minimization problem:
$$\min_{f \in \mathcal{F}} \; \frac{1}{n} \sum_{i=1}^n \ell(y_i, f(x_i)) + \frac{\lambda}{2} \|f\|_{\mathcal{F}}^2, \quad (1)$$
where $\mathcal{F}$ is a reproducing kernel Hilbert space and $\ell : \mathcal{Y} \times \mathcal{Y} \to \mathbb{R}$ is a loss function. We denote by $k : \mathcal{X} \times \mathcal{X} \to \mathbb{R}$ the positive definite kernel corresponding to $\mathcal{F}$, and by $\varphi : \mathcal{X} \to \mathcal{F}$ a corresponding feature map; that is, $k(x, x') = \langle \varphi(x), \varphi(x') \rangle_{\mathcal{F}}$ for every $x, x' \in \mathcal{X}$. The representer theorem [11, 12] allows us to reduce Problem (1) to a finite-dimensional optimization problem, in which case Problem (1) boils down to finding the vector $\alpha \in \mathbb{R}^n$ that solves
$$\min_{\alpha \in \mathbb{R}^n} \; \frac{1}{n} \sum_{i=1}^n \ell(y_i, (K\alpha)_i) + \frac{\lambda}{2} \alpha^\top K \alpha, \quad (2)$$
where $K_{ij} = k(x_i, x_j)$. We let $U \Sigma U^\top$ be the eigenvalue decomposition of K, with $\Sigma = \mathrm{Diag}(\sigma_1, \ldots, \sigma_n)$, $\sigma_1 \ge \cdots \ge \sigma_n \ge 0$, and U an orthogonal matrix. The underlying data model is
$$y_i = f^*(x_i) + \sigma^2 \xi_i, \quad i = 1, \ldots, n, \quad \text{with } f^* \in \mathcal{F},$$
$(x_i)_{1 \le i \le n}$ a deterministic sequence, and the $\xi_i$ i.i.d. standard normal random variables. We consider $\ell$ to be the squared loss, in which case we will be interested in the mean squared error as a measure of statistical risk: for any estimator $\hat f$, let
$$R(\hat f) := \frac{1}{n} \mathbb{E}_\xi \|\hat f - f^*\|_2^2 \quad (3)$$
be the risk function of $\hat f$, where $\mathbb{E}_\xi$ denotes the expectation under the randomness induced by $\xi$. In this setting the problem is called Kernel Ridge Regression (KRR). The solution to Problem (2) is $\alpha = (K + n\lambda I)^{-1} y$, and the estimate of $f^*$ at any training point $x_i$ is given by $\hat f(x_i) = (K\alpha)_i$. We will use $\hat f_K$ as a shorthand for the vector $(\hat f(x_i))_{1 \le i \le n} \in \mathbb{R}^n$ when the matrix K is used as a kernel matrix; this notation will be used accordingly for other kernel matrices (e.g., $\hat f_L$ for a matrix L). Recall that the risk of the estimator $\hat f_K$ can be decomposed into a bias and a variance term:
$$R(\hat f_K) = \frac{1}{n} \mathbb{E}_\xi \|K(K + n\lambda I)^{-1}(f^* + \sigma^2 \xi) - f^*\|_2^2 = \frac{1}{n} \|(K(K + n\lambda I)^{-1} - I) f^*\|_2^2 + \frac{\sigma^2}{n} \mathbb{E}_\xi \|K(K + n\lambda I)^{-1} \xi\|_2^2 = n \lambda^2 \|(K + n\lambda I)^{-1} f^*\|_2^2 + \frac{\sigma^2}{n} \mathrm{Tr}\!\left(K^2 (K + n\lambda I)^{-2}\right) := \mathrm{bias}(K)^2 + \mathrm{variance}(K). \quad (4)$$

¹We will refer to it as the maximal degrees of freedom.

Solving Problem (2), either by a direct method or by an optimization algorithm, requires at least quadratic and often cubic running time in n, which is prohibitive in the large-scale setting. The so-called Nyström method approximates the solution to Problem (2) by substituting K with a low-rank approximation of K. In practice, this approximation is often not only fast to construct, but the resulting learning problem is also often easier to solve [13, 14, 15, 2]. The method operates as follows. A small number of columns $K_1, \ldots, K_p$ are randomly sampled from K. If we let $C = [K_1, \ldots, K_p] \in \mathbb{R}^{n \times p}$ denote the matrix containing the sampled columns and $W \in \mathbb{R}^{p \times p}$ the overlap between C and $C^\top$ in K, then the Nyström approximation of K is the matrix $L = C W^{\dagger} C^\top$. More generally, if we let $S \in \mathbb{R}^{n \times p}$ be an arbitrary sketching matrix, i.e., a tall and skinny matrix that, when left-multiplied by K, produces a "sketch" of K that preserves some desirable properties, then the Nyström approximation associated with S is $L = K S (S^\top K S)^{\dagger} S^\top K$. For instance, for random sampling algorithms, S contains a non-zero entry at position (i, j) if the i-th column of K is chosen at the j-th trial of the sampling process. Alternatively, S could be a random projection matrix; or S could be constructed by some other (perhaps deterministic) method, as long as it satisfies some structural properties, depending on the application [8, 2, 6, 5]. We will focus in this paper on analyzing this approximation in the statistical prediction context related to the estimation of $f^*$ by solving Problem (2). We proceed by revisiting and improving upon prior results from three different areas. The first result (Theorem 1) is on the behavior of the bias of $\hat f_L$ when L is constructed using a general sketching matrix S. This result underlies the statistical analysis of the Nyström method. To see this, first, it is not hard to prove that $L \preceq K$ in the usual order on the positive semi-definite cone.
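As a small numerical sanity check (synthetic K and uniform column sampling, not part of the paper), the column-sampled Nyström construction and the claim $L \preceq K$ can be verified directly; $K - L$ is a (generalized) Schur complement of an SPSD matrix and is therefore positive semi-definite:

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 80, 15

# Synthetic well-conditioned SPSD "kernel" matrix.
Z = rng.standard_normal((n, 8))
K = Z @ Z.T + 0.5 * np.eye(n)

# Nystrom approximation from p uniformly sampled columns: L = C W^+ C^T.
idx = rng.choice(n, size=p, replace=False)
C = K[:, idx]                      # sampled columns
W = K[np.ix_(idx, idx)]            # overlap block of C and C^T in K
L = C @ np.linalg.solve(W, C.T)    # W is invertible here, so W^+ = W^{-1}

# L should be SPSD and satisfy L <= K in the positive semi-definite order.
sym = lambda M: (M + M.T) / 2
eig_L = np.linalg.eigvalsh(sym(L))
eig_gap = np.linalg.eigvalsh(sym(K - L))
```

Both spectra should be non-negative up to floating-point error, confirming $0 \preceq L \preceq K$.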
Second, one can prove that the variance is matrix-increasing; hence the variance will decrease when replacing K by L. On the other hand, the bias (while not matrix monotone in general) can be proven to not increase too much when replacing K by L. This latter statement is the main technical difficulty in obtaining a bound on $R(\hat f_L)$ (see Appendix A). A form of this result is due to Bach [3] in the case where S is a uniform sampling matrix. The second result (Theorem 2) is a concentration bound for approximating matrix multiplication when the rank-one components of the product are sampled non-uniformly. This result is derived from the matrix Bernstein inequality, and yields a sharp quantification of the deviation of the approximation from the true product. The third result (Definition 1) is an extension of the definition of the leverage scores to the context of kernel ridge regression. Whereas the notion of leverage is established as an algorithmic tool in randomized linear algebra, we introduce a natural counterpart of it in this statistical setting. By combining these contributions, we are able to give a sharp statistical statement on the behavior of the Nyström method when one is allowed to sample non-uniformly. All the proofs are deferred to the appendix (or see [1]).

3 Revisiting prior work and new results

3.1 A structural result

We begin by stating a "structural" result that upper-bounds the bias of the estimator constructed using the approximation L. This result is deterministic: it only depends on the properties of the input data, and it holds for any sketching matrix S that satisfies certain conditions. This way, the randomness of the construction of S is decoupled from the rest of the analysis.
We highlight the fact that this view offers a possible way of improving the current results, since a better construction of S (whether deterministic or random) satisfying the data-related conditions would immediately lead to downstream algorithmic and statistical improvements in this setting.

Theorem 1. Let $S \in \mathbb{R}^{n \times p}$ be a sketching matrix and L the corresponding Nyström approximation. For $\gamma > 0$, let $\Phi = \Sigma(\Sigma + n\gamma I)^{-1}$. If the sketching matrix S satisfies
$$\lambda_{\max}\!\left( \Phi - \Phi^{1/2} U^\top S S^\top U \Phi^{1/2} \right) \le t$$
for $t \in (0, 1)$, and $\lambda \ge \frac{1}{1-t} \|S\|_{\mathrm{op}}^2 \cdot \frac{\lambda_{\max}(K)}{n}$, where $\lambda_{\max}$ denotes the maximum eigenvalue and $\|\cdot\|_{\mathrm{op}}$ is the operator norm, then
$$\mathrm{bias}(L) \le \frac{1 + \gamma/\lambda}{1 - t} \, \mathrm{bias}(K). \quad (5)$$

In the special case where S contains one non-zero entry equal to $1/\sqrt{pn}$ in every column, with p the number of sampled columns, the result and its proof can be found in [3] (Appendix B.2), although we believe that their argument contains a problematic statement. We propose an alternative and complete proof in Appendix A. The subsequent analysis unfolds in two steps: (1) assuming the sketching matrix S satisfies the conditions stated in Theorem 1, we will have $R(\hat f_L) \lesssim R(\hat f_K)$, and (2) matrix concentration is used to show that an appropriate random construction of S satisfies the said conditions. We start by stating the concentration result that is the source of our improvement (Section 3.2), define a notion of statistical leverage scores (Section 3.3), and then state and prove the main statistical result (Theorem 3, Section 3.4). We then present our main algorithmic result, consisting of a fast approximation to this new notion of leverage scores (Section 3.5).

3.2 A concentration bound on matrix multiplication

Next, we state our result for approximating matrix products of the form $\Psi \Psi^\top$ when a few columns of $\Psi$ are sampled to form the approximate product $\Psi_I \Psi_I^\top$, where $\Psi_I$ contains the chosen columns. The proof relies on a matrix Bernstein inequality (see, e.g., [16]) and is presented at the end of the paper (Appendix B).

Theorem 2.
Let n, m be positive integers. Consider a matrix $\Psi \in \mathbb{R}^{n \times m}$ and denote by $\psi_i$ the i-th column of $\Psi$. Let $p \le m$ and let $I = \{i_1, \ldots, i_p\}$ be a subset of $\{1, \ldots, m\}$ formed by p elements chosen randomly with replacement, according to a distribution satisfying
$$\forall i \in \{1, \ldots, m\}, \quad \Pr(\text{choosing } i) = p_i \ge \beta \, \frac{\|\psi_i\|_2^2}{\|\Psi\|_F^2} \quad (6)$$
for some $\beta \in (0, 1]$. Let $S \in \mathbb{R}^{n \times p}$ be a sketching matrix such that $S_{ij} = 1/\sqrt{p \, p_{i_j}}$ if $i = i_j$, and 0 elsewhere. Then
$$\Pr\!\left( \lambda_{\max}\!\left( \Psi\Psi^\top - \Psi S S^\top \Psi^\top \right) \ge t \right) \le n \exp\!\left( \frac{-p t^2 / 2}{\lambda_{\max}(\Psi\Psi^\top)\left( \|\Psi\|_F^2 / \beta + t/3 \right)} \right). \quad (7)$$

Remarks:
1. This result will be used with $\Psi = \Phi^{1/2} U^\top$, in conjunction with Theorem 1, to prove our main result in Theorem 3. Notice that $\Psi^\top$ is a scaled version of the eigenvector matrix, with a scaling given by the diagonal matrix $\Phi = \Sigma(\Sigma + n\gamma I)^{-1}$, which should be thought of as a "soft projection" matrix that smoothly selects the top part of the spectrum of K. The setting of Gittens et al. [2], in which $\Phi$ is a 0-1 diagonal matrix, is the closest analog of our setting.
2. It is known that $p_i = \|\psi_i\|_2^2 / \|\Psi\|_F^2$ is the optimal sampling distribution in terms of minimizing the expected error $\mathbb{E}\|\Psi\Psi^\top - \Psi S S^\top \Psi^\top\|_F^2$ [17]. The above result exhibits a robustness property by allowing the chosen sampling distribution to differ from the optimal one by a factor $\beta$.² The sub-optimality of such a distribution is reflected in the upper bound (7) by the amplification of the squared Frobenius norm of $\Psi$ by a factor $1/\beta$. For instance, if the sampling distribution is chosen to be uniform, i.e., $p_i = 1/m$, then the value of $\beta$ for which (6) is tight is $\frac{\|\Psi\|_F^2}{m \max_i \|\psi_i\|_2^2}$, in which case we recover a concentration result proven by Bach [3]. Note that Theorem 2 is derived from one of the state-of-the-art bounds on matrix concentration, but it is one among many in the literature; and while it constitutes the basis of our improvement, it is possible that a concentration bound more tailored to the problem might yield sharper results.
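A quick Monte Carlo illustration of this sampling scheme (synthetic $\Psi$, squared-column-norm sampling, i.e. $\beta = 1$; not the authors' code): the rescaled sampled product is an unbiased estimator of $\Psi\Psi^\top$, and its spectral-norm error shrinks as the number of sampled columns p grows.

```python
import numpy as np

rng = np.random.default_rng(4)
n, m = 20, 400
Psi = rng.standard_normal((n, m)) * (1 + np.arange(m) / m)  # columns of varying norm

# Optimal distribution for approximating Psi Psi^T: p_i proportional to ||psi_i||^2.
col_norms2 = np.sum(Psi ** 2, axis=0)
probs = col_norms2 / col_norms2.sum()

def sampled_product(p):
    """Psi S S^T Psi^T, with S carrying 1/sqrt(p * p_{i_j}) at the sampled positions."""
    I = rng.choice(m, size=p, p=probs)           # p columns, with replacement
    cols = Psi[:, I] / np.sqrt(p * probs[I])     # rescale so the estimator is unbiased
    return cols @ cols.T

target = Psi @ Psi.T
err_small = np.linalg.norm(target - sampled_product(50), 2)    # few samples
err_large = np.linalg.norm(target - sampled_product(2000), 2)  # many samples
```

The error roughly follows the $1/\sqrt{p}$ rate suggested by the tail bound (7), so `err_large` should be markedly smaller than `err_small`.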
3.3 An extended definition of leverage

We introduce an extended notion of leverage scores that is specifically tailored to the ridge regression problem, and that we call the λ-ridge leverage scores.

Definition 1. For $\lambda > 0$, the λ-ridge leverage scores associated with the kernel matrix K and the parameter λ are
$$\forall i \in \{1, \ldots, n\}, \quad l_i(\lambda) = \sum_{j=1}^n \frac{\sigma_j}{\sigma_j + n\lambda} \, U_{ij}^2. \quad (8)$$

Note that $l_i(\lambda)$ is the i-th diagonal entry of $K(K + n\lambda I)^{-1}$. The quantities $(l_i(\lambda))_{1 \le i \le n}$ are in this setting the analogs of the so-called leverage scores in the statistical literature, as they characterize the data points that "stick out" and consequently most affect the result of a statistical procedure. They are classically defined as the row norms of the left singular matrix U of the input matrix, and they have been used in regression diagnostics for outlier detection [18], and more recently in randomized matrix algorithms, as they often provide an optimal importance sampling distribution for constructing random sketches for low-rank approximation [17, 19, 5, 6, 2] and least-squares regression [20] when the input matrix is tall and skinny ($n \ge m$). In the case where the input matrix is square, this definition is vacuous, as the row norms of U are all equal to 1. Recently, Gittens and Mahoney [2] used a truncated version of these scores (which they called leverage scores relative to the best rank-k space) to obtain the best algorithmic results known to date on the low-rank approximation of positive semi-definite matrices. Definition 1 is a weighted version of the classical leverage scores, where the weights depend on the spectrum of K and a regularization parameter λ. In this sense, it is an interpolation between Gittens' scores and the classical (tall-and-skinny) leverage scores, where the parameter λ plays the role of a rank parameter. In addition, we point out that Bach's maximal degrees of freedom $d_{\mathrm{mof}}$ is to the λ-ridge leverage scores what the coherence is to Gittens' leverage scores, i.e., their (scaled) maximum value: $d_{\mathrm{mof}}/n = \max_i l_i(\lambda)$; and that while the sum of Gittens' scores is the rank parameter k, the sum of the λ-ridge leverage scores is the effective dimensionality $d_{\mathrm{eff}}$. We argue in the following that Definition 1 provides a relevant notion of leverage in the context of kernel ridge regression. It is the natural counterpart of the algorithmic notion of leverage in the prediction context. We use it in the next section to make a statistical statement on the performance of the Nyström method.

²In their work [17], Drineas et al. have a comparable robustness statement for controlling the expected error. Our result is a robust quantification of the tail probability of the error, which is a much stronger statement.

3.4 Main statistical result: an error bound on approximate kernel ridge regression

Now we are able to give an improved version of a theorem by Bach [3] that establishes a performance guarantee for the use of the Nyström method in the context of kernel ridge regression. It is improved in the sense that the sufficient number of columns that should be sampled in order to incur no (or little) loss in prediction performance is lower. This is due to a more data-sensitive way of sampling the columns of K (depending on the λ-ridge leverage scores) during the construction of the approximation L. The proof is in Appendix C.

Theorem 3. Let $\lambda, \epsilon > 0$, $\rho \in (0, 1/2)$, $n \ge 2$, and let L be a Nyström approximation of K obtained by choosing p columns randomly with replacement according to a probability distribution $(p_i)_{1 \le i \le n}$ such that $\forall i \in \{1, \ldots, n\}$, $p_i \ge \beta \cdot l_i(\lambda\epsilon) / \sum_{i=1}^n l_i(\lambda\epsilon)$ for some $\beta \in (0, 1]$. Let $l \le \min_i l_i(\lambda\epsilon)$. If
$$p \ge 8\left( \frac{d_{\mathrm{eff}}}{\beta} + \frac{1}{6} \right) \log \frac{n}{\rho} \quad \text{and} \quad \lambda \ge 2\left( 1 + \frac{1}{l} \right) \frac{\lambda_{\max}(K)}{n},$$
with $d_{\mathrm{eff}} = \sum_{i=1}^n l_i(\lambda\epsilon) = \mathrm{Tr}(K(K + n\lambda\epsilon I)^{-1})$, then
$$R(\hat f_L) \le (1 + 2\epsilon)^2 R(\hat f_K)$$
with probability at least $1 - 2\rho$, where the $(l_i)_i$ are introduced in Definition 1 and R is defined in (3).
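To make Definition 1 and the quantities appearing above concrete, here is a small numeric sketch (synthetic K, not from the paper) computing the λ-ridge leverage scores, $d_{\mathrm{eff}}$, and $d_{\mathrm{mof}}$, along with a check that the matrix form $\mathrm{diag}(K(K + n\lambda I)^{-1})$ agrees with the spectral form of Eq. (8):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 50
Z = rng.standard_normal((n, 5))
K = Z @ Z.T + 0.1 * np.eye(n)     # synthetic SPSD kernel matrix
lam = 0.05

# lambda-ridge leverage scores: l_i(lam) = [K (K + n*lam*I)^{-1}]_{ii}.
P = K @ np.linalg.solve(K + n * lam * np.eye(n), np.eye(n))
scores = np.diag(P)

d_eff = scores.sum()              # effective dimensionality Tr(K (K + n*lam*I)^{-1})
d_mof = n * scores.max()          # maximal degrees of freedom  n * max_i l_i(lam)

# Equivalent spectral form of Eq. (8): sum_j sigma_j/(sigma_j + n*lam) * U_ij^2.
sig, U = np.linalg.eigh(K)
scores_spec = (U ** 2) @ (sig / (sig + n * lam))
```

Each score lies in (0, 1), their sum is $d_{\mathrm{eff}}$, and $d_{\mathrm{eff}} \le d_{\mathrm{mof}}$ always, which is the gap the leverage-based sampling of Theorem 3 exploits.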
Theorem 3 asserts that substituting the kernel matrix K by a Nyström approximation of rank p in the KRR problem induces an arbitrarily small loss in prediction accuracy, provided that p scales linearly with the effective dimensionality $d_{\mathrm{eff}}$³ and that λ is not too small.⁴ The leverage-based sampling appears to be crucial for obtaining this dependence, as the λ-ridge leverage scores provide information on which columns (and hence which data points) capture most of the difficulty of the estimation problem. Also, as a sanity check, the smaller the target accuracy ϵ, the higher $d_{\mathrm{eff}}$, and the more uniform the sampling distribution $(l_i(\lambda\epsilon))_i$ becomes. In the limit ϵ → 0, p is of the order of n and the scores are uniform, and the method is essentially equivalent to using the entire matrix K. Moreover, if the sampling distribution $(p_i)_i$ is a factor β away from optimal, a slight oversampling (i.e., increasing p by a factor 1/β) achieves the same performance. In this sense, the above result shows robustness to the sampling distribution. This property is very beneficial from an implementation point of view, as the error bounds still hold when only an approximation of the leverage scores is available. If the columns are sampled uniformly, a worse lower bound on p, depending on $d_{\mathrm{mof}}$, is obtained [3].

3.5 Main algorithmic result: a fast approximation to the λ-ridge leverage scores

Although the λ-ridge leverage scores can be naively computed using an SVD, this exact computation is as costly as solving the original Problem (2). Therefore, the central role they play in the above result motivates the problem of a fast approximation, in much the same way that the importance of the usual leverage scores motivated Drineas et al. to approximate them in random projection time [7]. Success in this task allows us to combine the running time benefits with the improved statistical guarantees we have provided.
Algorithm:
• Inputs: data points $(x_i)_{1 \le i \le n}$, probability vector $(p_i)_{1 \le i \le n}$, sampling parameter $p \in \{1, 2, \ldots\}$, $\lambda > 0$, $\epsilon \in (0, 1/2)$.
• Output: $(\tilde l_i)_{1 \le i \le n}$, ϵ-approximations to $(l_i(\lambda))_{1 \le i \le n}$.
1. Sample p data points from $(x_i)_{1 \le i \le n}$ with replacement, with probabilities $(p_i)_{1 \le i \le n}$.
2. Compute the corresponding columns $K_1, \ldots, K_p$ of the kernel matrix.
3. Construct $C = [K_1, \ldots, K_p] \in \mathbb{R}^{n \times p}$ and $W \in \mathbb{R}^{p \times p}$ as presented in Section 2.
4. Construct $B \in \mathbb{R}^{n \times p}$ such that $B B^\top = C W^{\dagger} C^\top$.
5. For every $i \in \{1, \ldots, n\}$, set
$$\tilde l_i = B_i^\top (B^\top B + n\lambda I)^{-1} B_i, \quad (9)$$
where $B_i$ is the i-th row of B, and return it.

³Note that $d_{\mathrm{eff}}$ depends on the precision parameter ϵ, which is absent in the classical definition of the effective dimensionality [10, 9, 3]. However, the following bound holds: $d_{\mathrm{eff}} \le \frac{1}{\epsilon} \mathrm{Tr}(K(K + n\lambda I)^{-1})$.
⁴This condition on λ is not necessary if one constructs L as $K S (S^\top K S + n\lambda\epsilon I)^{-1} S^\top K$ (see proof).

Running time: The running time of the above algorithm is dominated by steps 4 and 5. Indeed, constructing B can be done using a Cholesky factorization of W followed by a multiplication of C by the inverse of the obtained Cholesky factor, which yields a running time of $O(p^3 + np^2)$. Computing the approximate leverage scores $(\tilde l_i)_{1 \le i \le n}$ in step 5 also runs in $O(p^3 + np^2)$. Thus, for $p \ll n$, the overall algorithm runs in $O(np^2)$. Note that formula (9) only involves matrices and vectors of size p (everything is computed in the smaller dimension p); the fact that this yields a correct approximation relies on the matrix inversion lemma (see proof in Appendix D). Also, only the relevant columns of K are computed, and we never have to form the entire kernel matrix. This improves over earlier methods [2] that require all of K to be written down in memory. The improved running time is obtained by considering the construction (9), which is quite different from the regular setting of approximating the leverage scores of a rectangular matrix [7].
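A compact sketch of steps 1-5 (not the authors' code: a dense synthetic K is formed here only so the approximation can be checked against the exact scores, and an SVD-based factor stands in for the Cholesky construction of B so that a singular W, which occurs when an index is sampled twice, is handled):

```python
import numpy as np

rng = np.random.default_rng(3)
n, p, lam = 100, 20, 0.02
Z = rng.standard_normal((n, 8))
K = Z @ Z.T + 0.5 * np.eye(n)          # synthetic SPSD kernel matrix

# Steps 1-3: sample p columns with probabilities p_i = K_ii / Tr(K); form C and W.
probs = np.diag(K) / np.trace(K)
idx = rng.choice(n, size=p, replace=True, p=probs)
C = K[:, idx]
W = K[np.ix_(idx, idx)]

# Step 4: B with B B^T = C W^+ C^T (SVD in place of Cholesky, robust to singular W).
U, s, _ = np.linalg.svd(W)
keep = s > 1e-10 * s.max()
B = C @ (U[:, keep] / np.sqrt(s[keep]))

# Step 5: approximate scores, computed entirely in the small dimension.
r = B.shape[1]
G = np.linalg.solve(B.T @ B + n * lam * np.eye(r), B.T)
l_tilde = np.einsum('ij,ji->i', B, G)   # l~_i = B_i^T (B^T B + n lam I)^{-1} B_i

# Exact scores, for comparison only (never needed by the algorithm itself).
l_exact = np.diag(K @ np.linalg.solve(K + n * lam * np.eye(n), np.eye(n)))
```

Since $B B^\top = L \preceq K$ and $x \mapsto x/(x + n\lambda)$ is operator monotone, the approximate scores never exceed the exact ones, matching the one-sided bounds of Theorem 4.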
We now give both additive and multiplicative error bounds on its approximation quality.

Theorem 4. Let ϵ ∈ (0, 1/2), ρ ∈ (0, 1) and λ > 0. Let L be a Nyström approximation of K obtained by choosing p columns at random with probabilities p_i = K_ii / Tr(K), i = 1, . . . , n. If

p ≥ (8 Tr(K) / (nλϵ) + 1/6) log(n/ρ),

then, with probability at least 1 − ρ, for every i ∈ {1, . . . , n}:

(additive error bound)        l_i(λ) − 2ϵ ≤ ˜l_i ≤ l_i(λ);
(multiplicative error bound)  ((σ_n − nλϵ) / (σ_n + nλϵ)) l_i(λ) ≤ ˜l_i ≤ l_i(λ).

Remarks: 1. Theorem 4 states that if the columns of K are sampled proportionally to K_ii, then O(Tr(K)/(nλ)) samples suffice. Recall that K_ii = ‖φ(x_i)‖², so our procedure amounts to sampling according to the squared lengths of the data vectors, which has been used extensively in different contexts of randomized matrix approximation [21, 17, 19, 8, 2]. 2. Due to how λ is defined in eq. (1), the n in the denominator is artificial: nλ should be thought of as a "rescaled" regularization parameter λ′. In some settings, the λ that yields the best generalization error scales like O(1/√n), hence p = O(Tr(K)/√n) is sufficient. On the other hand, if the columns are sampled uniformly, one gets p = O(dmof) = O(n max_i l_i(λ)).

4 Experiments

We test our results on several datasets: a synthetic regression problem from [3] that illustrates the importance of the λ-ridge leverage scores, the Pumadyn family consisting of the three datasets pumadyn-32fm, pumadyn-32fh and pumadyn-32nh (footnote 5), and the Gas Sensor Array Drift Dataset from the UCI database (footnote 6). The synthetic case is a regression problem on the interval X = [0, 1] where, given a sequence (x_i)_{1≤i≤n} and a sequence of noise (ϵ_i)_{1≤i≤n}, we observe y_i = f(x_i) + σ²ϵ_i, i ∈ {1, . . . , n}. The function f belongs to the RKHS F generated by the kernel k(x, y) = (1/(2β)!) B_{2β}(x − y − ⌊x − y⌋), where B_{2β} is the 2β-th Bernoulli polynomial [3].
One important feature of this regression problem is the distribution of the points (x_i)_{1≤i≤n} on the interval X: if they are spread uniformly over the interval, the λ-ridge leverage scores (l_i(λ))_{1≤i≤n} are uniform for every λ > 0, and uniform column sampling is optimal. In fact, if x_i = (i − 1)/n for i = 1, . . . , n, the kernel matrix K is a circulant matrix [3], in which case we can prove that the λ-ridge leverage scores are constant. Otherwise, if the data points are distributed asymmetrically on the interval, the λ-ridge leverage scores are non-uniform, and importance sampling is beneficial (Figure 1). In this experiment, the data points x_i ∈ (0, 1) have been generated from a distribution symmetric about 1/2, with high density near the borders of the interval (0, 1) and low density at its center. The number of observations is n = 500.

Figure 1: The λ-ridge leverage scores for the synthetic Bernoulli data set described in the text (left) and the MSE risk vs. the number of sampled columns used to construct the Nyström approximation for different sampling methods (right).

In Figure 1, we can see that there are few data points with high leverage, and those correspond to the region that is underrepresented in the data sample (i.e., the region close to the center of the interval, since it has the lowest density of observations). The λ-ridge leverage scores capture the importance of these data points, providing a way to detect them (e.g., via an outlier analysis) had we not known of their existence. For all datasets, we determine λ and the bandwidth of k by cross-validation, and we compute the effective dimensionality deff and the maximal degrees of freedom dmof. Table 1 summarizes the experiments.

Footnote 5: http://www.cs.toronto.edu/~delve/data/pumadyn/desc.html
Footnote 6: https://archive.ics.uci.edu/ml/datasets/Gas+Sensor+Array+Drift+Dataset
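This effect is easy to reproduce. The sketch below computes the exact scores l_i(λ) = (K(K + nλI)^{−1})_ii for points drawn densely near the ends of (0, 1) and sparsely in the middle; the RBF kernel and the Beta(0.5, 0.5) draw are our own stand-ins for the Bernoulli-kernel setup of the text.

```python
import numpy as np

rng = np.random.default_rng(1)
n, lam, bw = 300, 1e-3, 0.05
# Arcsine-distributed points: dense near 0 and 1, sparse around the center.
x = rng.beta(0.5, 0.5, size=n)
K = np.exp(-(x[:, None] - x[None, :]) ** 2 / (2 * bw ** 2))
# Exact lambda-ridge leverage scores l_i = (K (K + n*lam*I)^{-1})_ii.
lev = np.diag(K @ np.linalg.inv(K + n * lam * np.eye(n)))
# Points in the underrepresented center of the interval.
mid = (x > 0.4) & (x < 0.6)
```

With this setup, the sparse central points carry above-average leverage, mirroring the behaviour shown in Figure 1.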
It is often the case that deff ≪ dmof and R(f̂_L)/R(f̂_K) ≃ 1, in agreement with Theorem 3.

kernel | dataset | n | nb. feat. | bandwidth | λ | deff | dmof | risk ratio R(f̂_L)/R(f̂_K)
Bern | Synth | 500 | – | – | 1e−6 | 24 | 500 | 1.01 (p = 2 deff)
Linear | Gas2 | 1244 | 128 | – | 1e−3 | 126 | 1244 | 1.10 (p = 2 deff)
Linear | Gas3 | 1586 | 128 | – | 1e−3 | 125 | 1586 | 1.09 (p = 2 deff)
Linear | Pum-32fm | 2000 | 32 | – | 1e−3 | 31 | 2000 | 0.99 (p = 2 deff)
Linear | Pum-32fh | 2000 | 32 | – | 1e−3 | 31 | 2000 | 0.99 (p = 2 deff)
Linear | Pum-32nh | 2000 | 32 | – | 1e−3 | 32 | 2000 | 0.99 (p = 2 deff)
RBF | Gas2 | 1244 | – | 1 | 4.5e−4 | 1135 | 1244 | 1.56 (p = deff)
RBF | Gas3 | 1586 | – | 1 | 5e−4 | 1450 | 1586 | 1.50 (p = deff)
RBF | Pum-32fm | 2000 | – | 5 | 0.5 | 142 | 1897 | 1.00 (p = deff)
RBF | Pum-32fh | 2000 | – | 5 | 5e−2 | 747 | 1989 | 1.00 (p = deff)
RBF | Pum-32nh | 2000 | – | 5 | 1.3e−2 | 1337 | 1997 | 0.99 (p = deff)

Table 1: Parameters and quantities of interest for the different datasets and kernels: the synthetic dataset using the Bernoulli kernel (denoted Synth), the Gas Sensor Array Drift Dataset (batches 2 and 3, denoted Gas2 and Gas3) and the Pumadyn datasets (Pum-32fm, Pum-32fh, Pum-32nh) using linear and RBF kernels.

5 Conclusion

We showed in this paper that, in the case of kernel ridge regression, the sampling complexity of the Nyström method can be reduced to the effective dimensionality of the problem, hence bridging and improving upon different previous attempts that established weaker forms of this result. This was achieved by defining a natural analog of the notion of leverage scores in this statistical context, and using it as a column sampling distribution. We obtained this result by combining and improving upon results that have emerged from two different perspectives on low-rank matrix approximation. We also presented a computationally tractable way to approximate these scores, i.e., one that runs in time O(np²) with p depending only on the trace of the kernel matrix and the regularization parameter. One natural unanswered question is whether it is possible to further reduce the sampling complexity, or whether the effective dimensionality is also a lower bound on p.
As pointed out by previous work [22, 3], it is likely that the same results hold for smooth losses beyond the squared loss (e.g., logistic regression). However, the situation is unclear for non-smooth losses (e.g., support vector regression).

Acknowledgements: We thank Xixian Chen for pointing out a mistake in an earlier draft of this paper [1]. We thank Francis Bach for stimulating discussions and for contributing to a rectified proof of Theorem 1. We thank Jason Lee and Aaditya Ramdas for fruitful discussions regarding the proof of Theorem 1. We thank Yuchen Zhang for pointing out the connection to his work.

References
[1] Ahmed El Alaoui and Michael W. Mahoney. Fast randomized kernel methods with statistical guarantees. arXiv preprint arXiv:1411.0306, 2014.
[2] Alex Gittens and Michael W. Mahoney. Revisiting the Nyström method for improved large-scale machine learning. In Proceedings of the 30th International Conference on Machine Learning, pages 567–575, 2013.
[3] Francis Bach. Sharp analysis of low-rank kernel matrix approximations. In Proceedings of the 26th Conference on Learning Theory, pages 185–209, 2013.
[4] Francis Bach. Personal communication, October 2015.
[5] Petros Drineas, Michael W. Mahoney, and S. Muthukrishnan. Relative-error CUR matrix decompositions. SIAM Journal on Matrix Analysis and Applications, 30(2):844–881, 2008.
[6] Michael W. Mahoney and Petros Drineas. CUR matrix decompositions for improved data analysis. Proceedings of the National Academy of Sciences, 106(3):697–702, 2009.
[7] Petros Drineas, Malik Magdon-Ismail, Michael W. Mahoney, and David P. Woodruff. Fast approximation of matrix coherence and statistical leverage. The Journal of Machine Learning Research, 13(1):3475–3506, 2012.
[8] Michael W. Mahoney. Randomized algorithms for matrices and data. Foundations and Trends in Machine Learning, 3(2):123–224, 2011.
[9] Yuchen Zhang, John Duchi, and Martin Wainwright. Divide and conquer kernel ridge regression.
In Proceedings of the 26th Conference on Learning Theory, pages 592–617, 2013.
[10] Jerome Friedman, Trevor Hastie, and Robert Tibshirani. The Elements of Statistical Learning, volume 1. Springer Series in Statistics. Springer, Berlin, 2001.
[11] George Kimeldorf and Grace Wahba. Some results on Tchebycheffian spline functions. Journal of Mathematical Analysis and Applications, 33(1):82–95, 1971.
[12] Bernhard Schölkopf, Ralf Herbrich, and Alex J. Smola. A generalized representer theorem. In Computational Learning Theory, pages 416–426. Springer, 2001.
[13] Shai Fine and Katya Scheinberg. Efficient SVM training using low-rank kernel representations. The Journal of Machine Learning Research, 2:243–264, 2002.
[14] Christopher Williams and Matthias Seeger. Using the Nyström method to speed up kernel machines. In Proceedings of the 14th Annual Conference on Neural Information Processing Systems, pages 682–688, 2001.
[15] Sanjiv Kumar, Mehryar Mohri, and Ameet Talwalkar. Sampling techniques for the Nyström method. In International Conference on Artificial Intelligence and Statistics, pages 304–311, 2009.
[16] Joel A. Tropp. User-friendly tail bounds for sums of random matrices. Foundations of Computational Mathematics, 12(4):389–434, 2012.
[17] Petros Drineas, Ravi Kannan, and Michael W. Mahoney. Fast Monte Carlo algorithms for matrices I: Approximating matrix multiplication. SIAM Journal on Computing, 36(1):132–157, 2006.
[18] Samprit Chatterjee and Ali S. Hadi. Influential observations, high leverage points, and outliers in linear regression. Statistical Science, pages 379–393, 1986.
[19] Petros Drineas, Ravi Kannan, and Michael W. Mahoney. Fast Monte Carlo algorithms for matrices II: Computing a low-rank approximation to a matrix. SIAM Journal on Computing, 36(1):158–183, 2006.
[20] Petros Drineas, Michael W. Mahoney, S. Muthukrishnan, and Tamás Sarlós. Faster least squares approximation. Numerische Mathematik, 117(2):219–249, 2011.
[21] Alan Frieze, Ravi Kannan, and Santosh Vempala. Fast Monte Carlo algorithms for finding low-rank approximations. Journal of the ACM (JACM), 51(6):1025–1041, 2004.
[22] Francis Bach. Self-concordant analysis for logistic regression. Electronic Journal of Statistics, 4:384–414, 2010.
Skip-Thought Vectors

Ryan Kiros (1), Yukun Zhu (1), Ruslan Salakhutdinov (1,2), Richard S. Zemel (1,2), Antonio Torralba (3), Raquel Urtasun (1), Sanja Fidler (1)
(1) University of Toronto, (2) Canadian Institute for Advanced Research, (3) Massachusetts Institute of Technology

Abstract

We describe an approach for unsupervised learning of a generic, distributed sentence encoder. Using the continuity of text from books, we train an encoder-decoder model that tries to reconstruct the surrounding sentences of an encoded passage. Sentences that share semantic and syntactic properties are thus mapped to similar vector representations. We next introduce a simple vocabulary expansion method to encode words that were not seen as part of training, allowing us to expand our vocabulary to a million words. After training our model, we extract and evaluate our vectors with linear models on 8 tasks: semantic relatedness, paraphrase detection, image-sentence ranking, question-type classification and 4 benchmark sentiment and subjectivity datasets. The end result is an off-the-shelf encoder that can produce highly generic sentence representations that are robust and perform well in practice.

1 Introduction

Developing learning algorithms for distributed compositional semantics of words has been a long-standing open problem at the intersection of language understanding and machine learning. In recent years, several approaches have been developed for learning composition operators that map word vectors to sentence vectors, including recursive networks [1], recurrent networks [2], convolutional networks [3, 4] and recursive-convolutional methods [5, 6], among others. All of these methods produce sentence representations that are passed to a supervised task and depend on a class label in order to backpropagate through the composition weights. Consequently, these methods learn high-quality sentence representations but are tuned only for their respective task.
The paragraph vector of [7] is an alternative to the above models in that it can learn unsupervised sentence representations by introducing a distributed sentence indicator as part of a neural language model. The downside is that, at test time, inference must be performed to compute a new vector. In this paper we abstract away from the composition methods themselves and consider an alternative loss function that can be applied with any composition operator. We consider the following question: is there a task and a corresponding loss that will allow us to learn highly generic sentence representations? We give evidence for this by proposing a model for learning high-quality sentence vectors without a particular supervised task in mind. Using word vector learning as inspiration, we propose an objective function that abstracts the skip-gram model of [8] to the sentence level. That is, instead of using a word to predict its surrounding context, we instead encode a sentence to predict the sentences around it. Thus, any composition operator can be substituted as a sentence encoder and only the objective function is modified. Figure 1 illustrates the model. We call our model skip-thoughts, and vectors induced by our model are called skip-thought vectors. Our model depends on having a training corpus of contiguous text. We chose to use a large collection of novels, namely the BookCorpus dataset [9], for training our models. These are free books written by yet-unpublished authors. The dataset has books in 16 different genres, e.g., Romance (2,865 books), Fantasy (1,479), Science fiction (786), Teen (430), etc. Table 1 highlights summary statistics of the book corpus. Along with narrative, books contain dialogue, emotion and a wide range of interaction between characters. Furthermore, with a large enough collection, the training set is not biased towards any particular domain or application. Table 2 shows nearest neighbours
Figure 1: The skip-thoughts model.
Given a tuple (s_{i−1}, s_i, s_{i+1}) of contiguous sentences, with s_i the i-th sentence of a book, the sentence s_i is encoded and tries to reconstruct the previous sentence s_{i−1} and the next sentence s_{i+1}. In this example, the input is the sentence triplet I got back home. I could see the cat on the steps. This was strange. Unattached arrows are connected to the encoder output. Colors indicate which components share parameters. ⟨eos⟩ is the end-of-sentence token.

# of books | # of sentences | # of words | # of unique words | mean # of words per sentence
11,038 | 74,004,228 | 984,846,357 | 1,316,420 | 13

Table 1: Summary statistics of the BookCorpus dataset [9]. We use this corpus to train our model.

of sentences from a model trained on the BookCorpus dataset. These results show that skip-thought vectors learn to accurately capture the semantics and syntax of the sentences they encode. We evaluate our vectors in a newly proposed setting: after learning skip-thoughts, freeze the model and use the encoder as a generic feature extractor for arbitrary tasks. In our experiments we consider 8 tasks: semantic relatedness, paraphrase detection, image-sentence ranking and 5 standard classification benchmarks. In these experiments, we extract skip-thought vectors and train linear models to evaluate the representations directly, without any additional fine-tuning. As it turns out, skip-thoughts yield generic representations that perform robustly across all tasks considered. One difficulty that arises with such an experimental setup is constructing a word vocabulary large enough to encode arbitrary sentences. For example, a sentence from a Wikipedia article might contain nouns that are highly unlikely to appear in our book vocabulary. We solve this problem by learning a mapping that transfers word representations from one model to another.
Using pretrained word2vec representations learned with a continuous bag-of-words model [8], we learn a linear mapping from a word in word2vec space to a word in the encoder's vocabulary space. The mapping is learned using all words that are shared between the two vocabularies. After training, any word that appears in word2vec can then be assigned a vector in the encoder word embedding space.

2 Approach

2.1 Inducing skip-thought vectors

We treat skip-thoughts in the framework of encoder-decoder models (footnote 1). That is, an encoder maps words to a sentence vector and a decoder is used to generate the surrounding sentences. Encoder-decoder models have gained a lot of traction for neural machine translation. In this setting, an encoder is used to map, e.g., an English sentence into a vector. The decoder then conditions on this vector to generate a translation for the source English sentence. Several choices of encoder-decoder pairs have been explored, including ConvNet-RNN [10], RNN-RNN [11] and LSTM-LSTM [12]. The source sentence representation can also change dynamically through the use of an attention mechanism [13], which takes into account only the words relevant for the translation at any given time. In our model, we use an RNN encoder with GRU [14] activations and an RNN decoder with a conditional GRU. This model combination is nearly identical to the RNN encoder-decoder of [11] used in neural machine translation. GRU has been shown to perform as well as LSTM [2] on sequence modelling tasks [14] while being conceptually simpler: GRU units have only 2 gates and do not require a memory cell. While we use RNNs for our model, any encoder and decoder can be used so long as we can backpropagate through them. Assume we are given a sentence tuple (s_{i−1}, s_i, s_{i+1}). Let w_i^t denote the t-th word of sentence s_i and let x_i^t denote its word embedding. We describe the model in three parts: the encoder, the decoder and the objective function.

Encoder. Let w_i^1, . . . , w_i^N be the words in sentence s_i, where N is the number of words in the sentence. At each time step, the encoder produces a hidden state h_i^t which can be interpreted as the representation of the sequence w_i^1, . . . , w_i^t. The hidden state h_i^N thus represents the full sentence.

Footnote 1: A preliminary version of our model was developed in the context of a computer vision application [9].

Query and nearest sentence:

he ran his hand inside his coat , double-checking that the unopened letter was still there .
he slipped his hand between his coat and his shirt , where the folded copies lay in a brown envelope .

im sure youll have a glamorous evening , she said , giving an exaggerated wink .
im really glad you came to the party tonight , he said , turning to her .

although she could tell he had n't been too invested in any of their other chitchat , he seemed genuinely curious about this .
although he had n't been following her career with a microscope , he 'd definitely taken notice of her appearances .

an annoying buzz started to ring in my ears , becoming louder and louder as my vision began to swim .
a weighty pressure landed on my lungs and my vision blurred at the edges , threatening my consciousness altogether .

if he had a weapon , he could maybe take out their last imp , and then beat up errol and vanessa .
if he could ram them from behind , send them sailing over the far side of the levee , he had a chance of stopping them .

then , with a stroke of luck , they saw the pair head together towards the portaloos .
then , from out back of the house , they heard a horse scream probably in answer to a pair of sharp spurs digging deep into its flanks .

" i 'll take care of it , " goodman said , taking the phonebook .
" i 'll do that , " julia said , coming in .

he finished rolling up scrolls and , placing them to one side , began the more urgent task of finding ale and tankards .
he righted the table , set the candle on a piece of broken plate , and reached for his flint , steel , and tinder .

Table 2: In each example, the first sentence is a query while the second sentence is its nearest neighbour. Nearest neighbours were scored by cosine similarity from a random sample of 500,000 sentences from our corpus.

To encode a sentence, we iterate the following sequence of equations (dropping the subscript i):

r^t = σ(W_r x^t + U_r h^{t−1})    (1)
z^t = σ(W_z x^t + U_z h^{t−1})    (2)
h̄^t = tanh(W x^t + U(r^t ⊙ h^{t−1}))    (3)
h^t = (1 − z^t) ⊙ h^{t−1} + z^t ⊙ h̄^t    (4)

where h̄^t is the proposed state update at time t, z^t is the update gate, r^t is the reset gate and ⊙ denotes a component-wise product. Both gates take values between zero and one.

Decoder. The decoder is a neural language model which conditions on the encoder output h_i. The computation is similar to that of the encoder, except we introduce matrices C_z, C_r and C that are used to bias the update gate, reset gate and hidden state computation by the sentence vector. One decoder is used for the next sentence s_{i+1} while a second decoder is used for the previous sentence s_{i−1}. Separate parameters are used for each decoder, with the exception of the vocabulary matrix V, the weight matrix connecting the decoder's hidden state to a distribution over words. In what follows we describe the decoder for the next sentence s_{i+1}, although an analogous computation is used for the previous sentence s_{i−1}. Let h_{i+1}^t denote the hidden state of the decoder at time t.
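The encoder recurrence of Eqs. (1)-(4), the conditional variant used by the decoder (gates biased by the encoder output through C_r, C_z and C), and the softmax over the vocabulary matrix V can be sketched in NumPy. All dimensions, the random initialization, and the shared parameter set here are toy simplifications: the actual model uses separate decoder parameters (W^d, U^d, etc.) and much larger states.

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def gru_step(x, h_prev, P, h_enc=None):
    """One GRU step (Eqs. (1)-(4)); when h_enc is given, the gates and proposal
    are additionally biased by the encoder vector (the decoder's Cr, Cz, C)."""
    br = P['Cr'] @ h_enc if h_enc is not None else 0.0
    bz = P['Cz'] @ h_enc if h_enc is not None else 0.0
    bh = P['C'] @ h_enc if h_enc is not None else 0.0
    r = sigmoid(P['Wr'] @ x + P['Ur'] @ h_prev + br)           # reset gate
    z = sigmoid(P['Wz'] @ x + P['Uz'] @ h_prev + bz)           # update gate
    h_bar = np.tanh(P['W'] @ x + P['U'] @ (r * h_prev) + bh)   # proposed update
    return (1 - z) * h_prev + z * h_bar

rng = np.random.default_rng(0)
d_w, d_h, vocab = 8, 16, 50   # toy sizes, not the paper's dimensions
P = {k: 0.1 * rng.normal(size=(d_h, d_w)) for k in ('Wr', 'Wz', 'W')}
P.update({k: 0.1 * rng.normal(size=(d_h, d_h))
          for k in ('Ur', 'Uz', 'U', 'Cr', 'Cz', 'C')})
V = 0.1 * rng.normal(size=(vocab, d_h))   # vocabulary matrix

# Encode a toy 5-word "sentence"; the final state is the sentence vector.
words = rng.normal(size=(5, d_w))
h = np.zeros(d_h)
for x in words:
    h = gru_step(x, h, P)

# One decoder step conditioned on h, then a distribution over the vocabulary.
h_dec = gru_step(words[0], np.zeros(d_h), P, h_enc=h)
logits = V @ h_dec
p_word = np.exp(logits - logits.max())
p_word /= p_word.sum()
```

Because each state is a convex combination of the previous state and a tanh proposal, the hidden units always stay in (−1, 1).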
Decoding involves iterating through the following sequence of equations (dropping the subscript i + 1):

r^t = σ(W_r^d x^{t−1} + U_r^d h^{t−1} + C_r h_i)    (5)
z^t = σ(W_z^d x^{t−1} + U_z^d h^{t−1} + C_z h_i)    (6)
h̄^t = tanh(W^d x^{t−1} + U^d(r^t ⊙ h^{t−1}) + C h_i)    (7)
h_{i+1}^t = (1 − z^t) ⊙ h^{t−1} + z^t ⊙ h̄^t    (8)

Given h_{i+1}^t, the probability of word w_{i+1}^t given the previous t − 1 words and the encoder vector is

P(w_{i+1}^t | w_{i+1}^{<t}, h_i) ∝ exp(v_{w_{i+1}^t} h_{i+1}^t)    (9)

where v_{w_{i+1}^t} denotes the row of V corresponding to the word w_{i+1}^t. An analogous computation is performed for the previous sentence s_{i−1}.

Objective. Given a tuple (s_{i−1}, s_i, s_{i+1}), the objective optimized is the sum of the log-probabilities for the forward and backward sentences conditioned on the encoder representation:

Σ_t log P(w_{i+1}^t | w_{i+1}^{<t}, h_i) + Σ_t log P(w_{i−1}^t | w_{i−1}^{<t}, h_i)    (10)

The total objective is the above summed over all such training tuples.

2.2 Vocabulary expansion

We now describe how to expand our encoder's vocabulary to words it has not seen during training. Suppose we have a model that was trained to induce word representations, such as word2vec. Let V_w2v denote the word embedding space of these word representations and let V_rnn denote the RNN word embedding space. We assume the vocabulary of V_w2v is much larger than that of V_rnn. Our goal is to construct a mapping f : V_w2v → V_rnn parameterized by a matrix W such that v′ = Wv for v ∈ V_w2v and v′ ∈ V_rnn. Inspired by [15], which learned linear mappings between translation word spaces, we solve an un-regularized L2 linear regression loss for the matrix W. Thus, any word from V_w2v can now be mapped into V_rnn for encoding sentences.

3 Experiments

In our experiments, we evaluate the capability of our encoder as a generic feature extractor after training on the BookCorpus dataset. Our experimental setup on each task is as follows:
• Using the learned encoder as a feature extractor, extract skip-thought vectors for all sentences.
• If the task involves computing scores between pairs of sentences, compute component-wise features between pairs. This is described in more detail for each experiment.
• Train a linear classifier on top of the extracted features, with no additional fine-tuning or backpropagation through the skip-thoughts model.
We restrict ourselves to linear classifiers for two reasons. The first is to directly evaluate the representation quality of the computed vectors. It is possible that additional performance gains could be made throughout our experiments with non-linear models, but this falls outside the scope of our goal. Furthermore, it allows us to better analyze the strengths and weaknesses of the learned representations. The second reason is that reproducibility becomes very straightforward.

3.1 Details of training

To induce skip-thought vectors, we train two separate models on our book corpus. One is a unidirectional encoder with 2400 dimensions, which we subsequently refer to as uni-skip. The other is a bidirectional model with forward and backward encoders of 1200 dimensions each. This model contains two encoders with different parameters: one encoder is given the sentence in the correct order, while the other is given the sentence in reverse. The outputs are then concatenated to form a 2400-dimensional vector. We refer to this model as bi-skip. For training, we initialize all recurrent matrices with orthogonal initialization [16]. Non-recurrent weights are initialized from a uniform distribution in [-0.1, 0.1]. Mini-batches of size 128 are used and gradients are clipped if their norm exceeds 10. We used the Adam algorithm [17] for optimization. Both models were trained for roughly two weeks. As an additional experiment, we also report experimental results using a combined model, consisting of the concatenation of the vectors from uni-skip and bi-skip, resulting in a 4800-dimensional vector.
We refer to this model throughout as combine-skip. After our models are trained, we then employ vocabulary expansion to map word embeddings into the RNN encoder space. The publicly available CBOW word vectors are used for this purpose (footnote 2). The skip-thought models are trained with a vocabulary size of 20,000 words. After removing multi-word entries from the CBOW model, this results in a vocabulary size of 930,911 words. Thus, even though our skip-thoughts model was trained with only 20,000 words, after vocabulary expansion we can successfully encode 930,911 possible words. Since our goal is to evaluate skip-thoughts as a general feature extractor, we keep text pre-processing to a minimum. When encoding new sentences, no additional preprocessing is done other than basic tokenization. This is done to test the robustness of our vectors. As an additional baseline, we also consider the mean of the word vectors learned from the uni-skip model. We refer to this baseline as bow. This is to determine the effectiveness of a standard baseline trained on the BookCorpus.

3.2 Semantic relatedness

Our first experiment is on the SemEval 2014 Task 1: semantic relatedness SICK dataset [30]. Given two sentences, our goal is to produce a score of how semantically related these sentences are, based on human-generated scores. Each score is the average of 10 different human annotators. Scores take values between 1 and 5.
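Returning to the vocabulary expansion applied above: the un-regularized linear regression of Section 2.2 amounts to a single least-squares solve over the shared vocabulary. The sketch below uses random toy embeddings and illustrative dimensions; real word2vec vectors and the encoder's embedding table would take their place.

```python
import numpy as np

rng = np.random.default_rng(0)
d_w2v, d_rnn, shared = 100, 200, 1000   # toy dimensions, not the real ones

# Rows: embeddings of the words shared between the two vocabularies.
X = rng.normal(size=(shared, d_w2v))               # word2vec side (V_w2v)
W_true = rng.normal(size=(d_rnn, d_w2v)) / 10.0    # toy "ground truth" map
Y = X @ W_true.T                                   # RNN side (V_rnn)

# Un-regularized least squares for W in v' = W v: minimize ||X W^T - Y||^2.
A, *_ = np.linalg.lstsq(X, Y, rcond=None)
W = A.T

# Any unseen word's word2vec vector can now be mapped into the RNN space.
v_new = rng.normal(size=d_w2v)
v_mapped = W @ v_new
```

Since the toy targets are exactly linear in X, the solve recovers the generating map; with real embeddings the fit is only approximate, but the mapped vector can be fed to the encoder like any trained word embedding.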
A score of 1 indicates that the sentence pair is not at all related, while a score of 5 indicates they are highly related. The dataset comes with a predefined split of 4,500 training pairs, 500 development pairs and 4,927 testing pairs. All sentences are derived from existing image and video annotation datasets. The evaluation metrics are Pearson's r, Spearman's ρ, and mean squared error. Given the difficulty of this task, many existing systems employ a large amount of feature engineering and additional resources. Thus, we test how well our learned representations fare against heavily engineered pipelines.

Footnote 2: http://code.google.com/p/word2vec/

Method | r | ρ | MSE
Illinois-LH [18] | 0.7993 | 0.7538 | 0.3692
UNAL-NLP [19] | 0.8070 | 0.7489 | 0.3550
Meaning Factory [20] | 0.8268 | 0.7721 | 0.3224
ECNU [21] | 0.8414 | – | –
Mean vectors [22] | 0.7577 | 0.6738 | 0.4557
DT-RNN [23] | 0.7923 | 0.7319 | 0.3822
SDT-RNN [23] | 0.7900 | 0.7304 | 0.3848
LSTM [22] | 0.8528 | 0.7911 | 0.2831
Bidirectional LSTM [22] | 0.8567 | 0.7966 | 0.2736
Dependency Tree-LSTM [22] | 0.8676 | 0.8083 | 0.2532
bow | 0.7823 | 0.7235 | 0.3975
uni-skip | 0.8477 | 0.7780 | 0.2872
bi-skip | 0.8405 | 0.7696 | 0.2995
combine-skip | 0.8584 | 0.7916 | 0.2687
combine-skip+COCO | 0.8655 | 0.7995 | 0.2561

Method | Acc | F1
feats [24] | 73.2 | –
RAE+DP [24] | 72.6 | –
RAE+feats [24] | 74.2 | –
RAE+DP+feats [24] | 76.8 | 83.6
FHS [25] | 75.0 | 82.7
PE [26] | 76.1 | 82.7
WDDP [27] | 75.6 | 83.0
MTMETRICS [28] | 77.4 | 84.1
TF-KLD [29] | 80.4 | 86.0
bow | 67.8 | 80.3
uni-skip | 73.0 | 81.9
bi-skip | 71.2 | 81.2
combine-skip | 73.0 | 82.0
combine-skip + feats | 75.8 | 83.0

Table 3: Left: Test set results on the SICK semantic relatedness subtask. The evaluation metrics are Pearson's r, Spearman's ρ, and mean squared error. The first group of results are SemEval 2014 submissions, while the second group are results reported by [22]. Right: Test set results on the Microsoft Paraphrase Corpus. The evaluation metrics are classification accuracy and F1 score. Top: recursive autoencoder variants. Middle: the best published results on this dataset.
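The three evaluation metrics reported in Table 3 (left) can be computed with a few lines of NumPy. This is a generic sketch of our own (the rank transform in the Spearman computation does not average tied ranks), with toy score vectors standing in for real predictions.

```python
import numpy as np

def pearson_r(a, b):
    """Pearson correlation coefficient r."""
    a, b = a - a.mean(), b - b.mean()
    return float(a @ b / np.sqrt((a @ a) * (b @ b)))

def spearman_rho(a, b):
    """Spearman rho: Pearson correlation of the rank-transformed scores."""
    ra = np.argsort(np.argsort(a)).astype(float)
    rb = np.argsort(np.argsort(b)).astype(float)
    return pearson_r(ra, rb)

def mse(a, b):
    """Mean squared error."""
    return float(np.mean((a - b) ** 2))

gt = np.array([4.7, 3.8, 2.9, 4.5, 3.4])    # toy gold relatedness scores
pred = np.array([4.5, 4.0, 3.5, 4.5, 3.8])  # toy model predictions
```

In practice one would apply these to the 4,927 test-pair predictions; libraries such as SciPy provide tie-aware versions of the rank correlation.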
Recently, [22] showed that learning representations with an LSTM or Tree-LSTM for the task at hand can outperform these existing systems. We take this one step further and see how well our vectors, learned from a completely different task, capture semantic relatedness when only a linear model is used on top to predict scores. To represent a sentence pair, we use two features. Given two skip-thought vectors u and v, we compute their component-wise product u · v and their absolute difference |u − v| and concatenate them together. These two features were also used by [22]. To predict a score, we use the same setup as [22]. Let r⊤ = [1, . . . , 5] be an integer vector from 1 to 5. We compute a distribution p as a function of the prediction score y, given by p_i = y − ⌊y⌋ if i = ⌊y⌋ + 1, p_i = ⌊y⌋ − y + 1 if i = ⌊y⌋, and p_i = 0 otherwise. These then become our targets for a logistic regression classifier. At test time, given new sentence pairs, we first compute targets p̂ and then compute the relatedness score as r⊤p̂. As an additional comparison, we also explored appending features derived from an image-sentence embedding model trained on COCO (see Section 3.4). Given vectors u and v, we obtain vectors u′ and v′ from the learned linear embedding model and compute features u′ · v′ and |u′ − v′|. These are then concatenated to the existing features. Table 3 (left) presents our results. First, we observe that our models outperform all previous systems from the SemEval 2014 competition. This highlights that skip-thought vectors learn representations that are well suited for semantic relatedness. Our results are comparable to LSTMs whose representations are trained from scratch on this task. Only the dependency tree-LSTM of [22] performs better. We note that the dependency tree-LSTM relies on parsers whose training data is very expensive to collect and does not exist for all languages.
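Both ingredients of this setup, the pairwise features and the sparse target distribution p, are simple to write down. The sketch below is our own; in particular, the handling of the boundary score y = 5 is an assumption on our part.

```python
import numpy as np

def pair_features(u, v):
    """Component-wise product and absolute difference, concatenated."""
    return np.concatenate([u * v, np.abs(u - v)])

def score_to_target(y, K=5):
    """Sparse target p over classes {1,...,K}: p_{floor(y)} = floor(y) - y + 1,
    p_{floor(y)+1} = y - floor(y), zero elsewhere, so that r^T p = y."""
    p = np.zeros(K)
    f = int(np.floor(y))
    if f >= K:                 # boundary case y = K (our assumption)
        p[K - 1] = 1.0
    else:
        p[f - 1] = f - y + 1   # weight on class floor(y)   (1-indexed)
        p[f] = y - f           # weight on class floor(y)+1 (1-indexed)
    return p

r = np.arange(1, 6)            # r = [1, ..., 5]
p = score_to_target(3.6)       # mass split between classes 3 and 4
```

At test time the predicted relatedness is recovered as r⊤p̂, exactly as described above.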
We also observe that using features learned from an image-sentence embedding model on COCO gives an additional performance boost, resulting in a model that performs on par with the dependency tree-LSTM. To get a feel for the model outputs, Table 4 shows example cases of test set pairs. Our model is able to accurately predict relatedness on many challenging cases. On some examples, it fails to pick up on small distinctions that drastically change a sentence's meaning, such as tricks on a motorcycle versus tricking a person on a motorcycle.

Table 4: Example predictions from the SICK test set. GT is the ground truth relatedness, scored between 1 and 5. The last few results show examples where slight changes in sentence structure result in large changes in relatedness which our model was unable to score correctly.

Sentence 1 | Sentence 2 | GT | pred
A little girl is looking at a woman in costume | A young girl is looking at a woman in costume | 4.7 | 4.5
A little girl is looking at a woman in costume | The little girl is looking at a man in costume | 3.8 | 4.0
A little girl is looking at a woman in costume | A little girl in costume looks like a woman | 2.9 | 3.5
A sea turtle is hunting for fish | A sea turtle is hunting for food | 4.5 | 4.5
A sea turtle is not hunting for fish | A sea turtle is hunting for fish | 3.4 | 3.8
A man is driving a car | The car is being driven by a man | 5 | 4.9
There is no man driving the car | A man is driving a car | 3.6 | 3.5
A large duck is flying over a rocky stream | A duck, which is large, is flying over a rocky stream | 4.8 | 4.9
A large duck is flying over a rocky stream | A large stream is full of rocks, ducks and flies | 2.7 | 3.1
A person is performing acrobatics on a motorcycle | A person is performing tricks on a motorcycle | 4.3 | 4.4
A person is performing tricks on a motorcycle | The performer is tricking a person on a motorcycle | 2.6 | 4.4
Someone is pouring ingredients into a pot | Someone is adding ingredients to a pot | 4.4 | 4.0
Nobody is pouring ingredients into a pot | Someone is pouring ingredients into a pot | 3.5 | 4.2
Someone is pouring ingredients into a pot | A man is removing vegetables from a pot | 2.4 | 3.6

Table 5: COCO test-set results for image-sentence retrieval experiments. R@K is Recall@K (high is good). Med r is the median rank (low is good).

Model | Image Annotation (R@1, R@5, R@10, Med r) | Image Search (R@1, R@5, R@10, Med r)
Random Ranking | 0.1, 0.6, 1.1, 631 | 0.1, 0.5, 1.0, 500
DVSA [32] | 38.4, 69.6, 80.5, 1 | 27.4, 60.2, 74.8, 3
GMM+HGLMM [33] | 39.4, 67.9, 80.9, 2 | 25.1, 59.8, 76.6, 4
m-RNN [34] | 41.0, 73.0, 83.5, 2 | 29.0, 42.2, 77.0, 3
bow | 33.6, 65.8, 79.7, 3 | 24.4, 57.1, 73.5, 4
uni-skip | 30.6, 64.5, 79.8, 3 | 22.7, 56.4, 71.7, 4
bi-skip | 32.7, 67.3, 79.6, 3 | 24.2, 57.1, 73.2, 4
combine-skip | 33.8, 67.7, 82.1, 3 | 25.9, 60.0, 74.6, 4

3.3 Paraphrase detection

The next task we consider is paraphrase detection on the Microsoft Research Paraphrase Corpus [31]. On this task, two sentences are given and one must predict whether or not they are paraphrases. The training set consists of 4076 sentence pairs (2753 of which are positive) and the test set has 1725 pairs (1147 of which are positive). We compute a vector representing the pair of sentences in the same way as on the SICK dataset, using the component-wise product u · v and the absolute difference |u − v|, which are then concatenated together. We then train logistic regression on top to predict whether the sentences are paraphrases. Cross-validation is used for tuning the L2 penalty. As in the semantic relatedness task, paraphrase detection has largely been dominated by extensive feature engineering, or a combination of feature engineering with semantic spaces. We report experiments in two settings: one using the features as above and the other incorporating basic statistics between sentence pairs, the same features used by [24]. These are referred to as feats in our results. We isolate the results and baselines used in [24] as well as the top published results on this task.
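The pair representation described above (component-wise product concatenated with the absolute difference) is straightforward to reproduce. The sketch below is an illustrative implementation in numpy, using toy 4-dimensional vectors as stand-ins for 4800-dimensional skip-thought vectors; the function name is our own.

```python
import numpy as np

def pair_features(u, v):
    """Combine two sentence vectors into one pair representation using
    the component-wise product and absolute difference, as described
    for both the SICK relatedness and paraphrase detection tasks."""
    u, v = np.asarray(u, dtype=float), np.asarray(v, dtype=float)
    return np.concatenate([u * v, np.abs(u - v)])

# Toy 4-dimensional "sentence vectors".
u = np.array([1.0, -2.0, 0.5, 0.0])
v = np.array([1.0,  2.0, 0.5, 3.0])
f = pair_features(u, v)
print(f)        # first half: u * v, second half: |u - v|
print(f.shape)  # (8,)
```

The resulting vector is twice the dimension of the sentence vectors; a logistic regression classifier is then trained on top of it.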
Table 3 (right) presents our results, from which we can observe the following: (1) skip-thoughts alone outperform recursive nets with dynamic pooling when no hand-crafted features are used, (2) when other features are used, recursive nets with dynamic pooling work better, and (3) when skip-thoughts are combined with basic pairwise statistics, they become competitive with the state-of-the-art methods, which incorporate much more complicated features and hand-engineering. This is a promising result, as many of the sentence pairs have very fine-grained details that signal whether they are paraphrases.

3.4 Image-sentence ranking

We next consider the task of retrieving images and their sentence descriptions. For this experiment, we use the Microsoft COCO dataset [35], which is the largest publicly available dataset of images with high-quality sentence descriptions. Each image is annotated with 5 captions, each from a different annotator. Following previous work, we consider two tasks: image annotation and image search. For image annotation, an image is presented and sentences are ranked based on how well they describe the query image. The image search task is the reverse: given a caption, we retrieve images that are a good fit to the query. The training set comes with over 80,000 images, each with 5 captions. For development and testing we use the same splits as [32]. The development and test sets each contain 1000 images and 5000 captions. Evaluation is performed using Recall@K, namely the mean number of images for which the correct caption is ranked within the top-K retrieved results (and vice versa for sentences). We also report the median rank of the closest ground truth result from the ranked list. The best performing results on image-sentence ranking have all used RNNs for encoding sentences, where the sentence representation is learned jointly.
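Recall@K and the median rank are both simple functions of the rank assigned to each ground-truth item. The following is an illustrative numpy sketch (function names are our own, not from any released code):

```python
import numpy as np

def rank_of_truth(scores, truth_idx):
    """1-based rank of the ground-truth candidate, given similarity
    scores for all candidates (higher score = ranked earlier)."""
    order = np.argsort(-np.asarray(scores))
    return int(np.where(order == truth_idx)[0][0]) + 1

def recall_at_k(ranks, k):
    """Fraction of queries whose ground-truth item is in the top k."""
    return float(np.mean(np.asarray(ranks) <= k))

def median_rank(ranks):
    return float(np.median(ranks))

# Toy example: 4 queries whose ground-truth items landed at these ranks.
ranks = [1, 3, 12, 2]
print(recall_at_k(ranks, 5))  # 0.75
print(median_rank(ranks))     # 2.5
```

In the image annotation direction the candidates are captions; in the image search direction they are images.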
Recently, [33] showed that by using Fisher vectors for representing sentences, linear CCA can be applied to obtain performance that is as strong as using RNNs for this task. Thus the method of [33] is a strong baseline to compare our sentence representations with. For our experiments, we represent images using 4096-dimensional OxfordNet features from the 19-layer model of [36]. For sentences, we simply extract skip-thought vectors for each caption. The training objective we use is a pairwise ranking loss that has been previously used by many other methods. The only difference is that the scores are computed using only linear transformations of image and sentence inputs. The loss is given by:

\sum_x \sum_k \max\{0, \alpha - s(Ux, Vy) + s(Ux, Vy_k)\} + \sum_y \sum_k \max\{0, \alpha - s(Vy, Ux) + s(Vy, Ux_k)\},

where x is an image vector, y is the skip-thought vector for the ground-truth sentence, y_k are vectors for contrastive (incorrect) sentences and s(·, ·) is the image-sentence score. Cosine similarity is used for scoring. The model parameters are {U, V}, where U is the image embedding matrix and V is the sentence embedding matrix. In our experiments, we use a 1000-dimensional embedding, margin α = 0.2 and k = 50 contrastive terms. We trained for 15 epochs and saved our model any time the performance improved on the development set. Table 5 illustrates our results on this task. Using skip-thought vectors for sentences, we get performance that is on par with both [32] and [33] except for R@1 on image annotation, where other methods perform much better. Our results indicate that skip-thought vectors are representative enough to capture image descriptions without having to learn their representations from scratch. Combined with the results of [33], this also highlights that simple, scalable embedding techniques perform very well provided that high-quality image and sentence vectors are available.
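The ranking loss above can be written out directly. The following numpy sketch uses random stand-in data and our own helper names, with cosine similarity as the score s(·, ·); it is an illustration of the loss, not the paper's training code.

```python
import numpy as np

def pairwise_ranking_loss(U, V, x, y, Yk, Xk, alpha=0.2):
    """Margin ranking loss: images and sentences are mapped into a shared
    space by the linear maps U and V; contrastive sentences Yk and
    contrastive images Xk are pushed below the true pair by margin alpha."""
    def s(a, b):  # cosine similarity
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    ux, vy = U @ x, V @ y
    loss = 0.0
    for yk in Yk:          # contrastive sentences for image x
        loss += max(0.0, alpha - s(ux, vy) + s(ux, V @ yk))
    for xk in Xk:          # contrastive images for sentence y
        loss += max(0.0, alpha - s(vy, ux) + s(vy, U @ xk))
    return loss

rng = np.random.default_rng(0)
d_img, d_sen, d_emb, k = 8, 6, 5, 3   # tiny stand-in dimensions
U = rng.normal(size=(d_emb, d_img))   # image embedding matrix
V = rng.normal(size=(d_emb, d_sen))   # sentence embedding matrix
x, y = rng.normal(size=d_img), rng.normal(size=d_sen)
Yk = rng.normal(size=(k, d_sen))      # contrastive sentence vectors
Xk = rng.normal(size=(k, d_img))      # contrastive image vectors
print(pairwise_ranking_loss(U, V, x, y, Yk, Xk) >= 0.0)  # True
```

In practice the loss is minimized over minibatches with respect to U and V; here we only evaluate it.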
3.5 Classification benchmarks

For our final quantitative experiments, we report results on several classification benchmarks which are commonly used for evaluating sentence representation learning methods. We use 5 datasets: movie review sentiment (MR), customer product reviews (CR), subjectivity/objectivity classification (SUBJ), opinion polarity (MPQA) and question-type classification (TREC). On all datasets, we simply extract skip-thought vectors and train a logistic regression classifier on top. 10-fold cross-validation is used for evaluation on the first 4 datasets, while TREC has a pre-defined train/test split. We tune the L2 penalty using cross-validation (and thus use a nested cross-validation for the first 4 datasets).

Table 6: Classification accuracies on several standard benchmarks. Results are grouped as follows: (a) bag-of-words models; (b) supervised compositional models; (c) Paragraph Vector (unsupervised learning of sentence representations); (d) ours. Best results overall are bold while best results outside of group (b) are underlined.

Method | MR | CR | SUBJ | MPQA | TREC
NB-SVM [37] | 79.4 | 81.8 | 93.2 | 86.3 | -
MNB [37] | 79.0 | 80.0 | 93.6 | 86.3 | -
cBoW [6] | 77.2 | 79.9 | 91.3 | 86.4 | 87.3
GrConv [6] | 76.3 | 81.3 | 89.5 | 84.5 | 88.4
RNN [6] | 77.2 | 82.3 | 93.7 | 90.1 | 90.2
BRNN [6] | 82.3 | 82.6 | 94.2 | 90.3 | 91.0
CNN [4] | 81.5 | 85.0 | 93.4 | 89.6 | 93.6
AdaSent [6] | 83.1 | 86.3 | 95.5 | 93.3 | 92.4
Paragraph-vector [7] | 74.8 | 78.1 | 90.5 | 74.2 | 91.8
bow | 75.0 | 80.4 | 91.2 | 87.0 | 84.8
uni-skip | 75.5 | 79.3 | 92.1 | 86.9 | 91.4
bi-skip | 73.9 | 77.9 | 92.5 | 83.3 | 89.4
combine-skip | 76.5 | 80.1 | 93.6 | 87.1 | 92.2
combine-skip + NB | 80.4 | 81.3 | 93.6 | 87.5 | -

On these tasks, properly tuned bag-of-words models have been shown to perform exceptionally well. In particular, the NB-SVM of [37] is a fast and robust performer on these tasks. Skip-thought vectors potentially give an alternative to these baselines, being just as fast and easy to use.
For an additional comparison, we also see to what extent augmenting skip-thoughts with bigram Naive Bayes (NB) features improves performance.³ Table 6 presents our results. On most tasks, skip-thoughts performs about as well as the bag-of-words baselines but fails to improve over methods whose sentence representations are learned directly for the task at hand. This indicates that for tasks like sentiment classification, tuning the representations, even on small datasets, is likely to perform better than learning a generic unsupervised sentence vector on much bigger datasets. Finally, we observe that the skip-thoughts-NB combination is effective, particularly on MR. This results in a very strong new baseline for text classification: combine skip-thoughts with bag-of-words and train a linear model.

³We use the code available at https://github.com/mesnilgr/nbsvm

Figure 2: t-SNE embeddings of skip-thought vectors on different datasets: (a) TREC, (b) SUBJ, (c) SICK. Points are colored based on their labels (question type for TREC, subjectivity/objectivity for SUBJ). On the SICK dataset, each point represents a sentence pair and points are colored on a gradient based on their relatedness labels. Results best seen in electronic form.

3.6 Visualizing skip-thoughts

As a final experiment, we applied t-SNE [38] to skip-thought vectors extracted from the TREC, SUBJ and SICK datasets; the visualizations are shown in Figure 2. For the SICK visualization, each point represents a sentence pair, computed using the concatenation of the component-wise product and absolute difference of features. Even without the use of relatedness labels, skip-thought vectors learn to accurately capture this property.

4 Conclusion

We evaluated the effectiveness of skip-thought vectors as an off-the-shelf sentence representation with linear classifiers across 8 tasks. Many of the methods we compare against were only evaluated on 1 task.
The fact that skip-thought vectors perform well on all tasks considered highlights the robustness of our representations. We believe our model for learning skip-thought vectors only scratches the surface of possible objectives. Many variations have yet to be explored, including (a) deep encoders and decoders, (b) larger context windows, (c) encoding and decoding paragraphs, (d) other encoders, such as convnets. It is likely the case that more exploration of this space will result in even higher quality representations.

Acknowledgments

We thank Geoffrey Hinton for suggesting the name skip-thoughts. We also thank Felix Hill, Kelvin Xu, Kyunghyun Cho and Ilya Sutskever for valuable comments and discussion. This work was supported by NSERC, Samsung, CIFAR, Google and ONR Grant N00014-14-1-0232.

References

[1] Richard Socher, Alex Perelygin, Jean Y Wu, Jason Chuang, Christopher D Manning, Andrew Y Ng, and Christopher Potts. Recursive deep models for semantic compositionality over a sentiment treebank. In EMNLP, 2013.
[2] Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735–1780, 1997.
[3] Nal Kalchbrenner, Edward Grefenstette, and Phil Blunsom. A convolutional neural network for modelling sentences. ACL, 2014.
[4] Yoon Kim. Convolutional neural networks for sentence classification. EMNLP, 2014.
[5] Kyunghyun Cho, Bart van Merriënboer, Dzmitry Bahdanau, and Yoshua Bengio. On the properties of neural machine translation: Encoder-decoder approaches. SSST-8, 2014.
[6] Han Zhao, Zhengdong Lu, and Pascal Poupart. Self-adaptive hierarchical sentence model. IJCAI, 2015.
[7] Quoc V Le and Tomas Mikolov. Distributed representations of sentences and documents. ICML, 2014.
[8] Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. Efficient estimation of word representations in vector space. ICLR, 2013.
[9] Yukun Zhu, Ryan Kiros, Richard S. Zemel, Ruslan Salakhutdinov, Raquel Urtasun, Antonio Torralba, and Sanja Fidler. Aligning books and movies: Towards story-like visual explanations by watching movies and reading books. In ICCV, 2015.
[10] Nal Kalchbrenner and Phil Blunsom. Recurrent continuous translation models. In EMNLP, pages 1700–1709, 2013.
[11] Kyunghyun Cho, Bart van Merriënboer, Caglar Gulcehre, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. Learning phrase representations using RNN encoder-decoder for statistical machine translation. EMNLP, 2014.
[12] Ilya Sutskever, Oriol Vinyals, and Quoc V Le. Sequence to sequence learning with neural networks. In NIPS, 2014.
[13] Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. ICLR, 2015.
[14] Junyoung Chung, Caglar Gulcehre, Kyunghyun Cho, and Yoshua Bengio. Empirical evaluation of gated recurrent neural networks on sequence modeling. NIPS Deep Learning Workshop, 2014.
[15] Tomas Mikolov, Quoc V Le, and Ilya Sutskever. Exploiting similarities among languages for machine translation. arXiv preprint arXiv:1309.4168, 2013.
[16] Andrew M Saxe, James L McClelland, and Surya Ganguli. Exact solutions to the nonlinear dynamics of learning in deep linear neural networks. ICLR, 2014.
[17] Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. ICLR, 2015.
[18] Alice Lai and Julia Hockenmaier. Illinois-LH: A denotational and distributional approach to semantics. SemEval 2014, 2014.
[19] Sergio Jimenez, George Duenas, Julia Baquero, and Alexander Gelbukh. UNAL-NLP: Combining soft cardinality features for semantic textual similarity, relatedness and entailment. SemEval 2014, 2014.
[20] Johannes Bjerva, Johan Bos, Rob van der Goot, and Malvina Nissim. The Meaning Factory: Formal semantics for recognizing textual entailment and determining semantic similarity. SemEval 2014, page 642, 2014.
[21] Jiang Zhao, Tian Tian Zhu, and Man Lan. ECNU: One stone two birds: Ensemble of heterogenous measures for semantic relatedness and textual entailment. SemEval 2014, 2014.
[22] Kai Sheng Tai, Richard Socher, and Christopher D Manning. Improved semantic representations from tree-structured long short-term memory networks. ACL, 2015.
[23] Richard Socher, Andrej Karpathy, Quoc V Le, Christopher D Manning, and Andrew Y Ng. Grounded compositional semantics for finding and describing images with sentences. TACL, 2014.
[24] Richard Socher, Eric H Huang, Jeffrey Pennin, Christopher D Manning, and Andrew Y Ng. Dynamic pooling and unfolding recursive autoencoders for paraphrase detection. In NIPS, 2011.
[25] Andrew Finch, Young-Sook Hwang, and Eiichiro Sumita. Using machine translation evaluation techniques to determine sentence-level semantic equivalence. In IWP, 2005.
[26] Dipanjan Das and Noah A Smith. Paraphrase identification as probabilistic quasi-synchronous recognition. In ACL, 2009.
[27] Stephen Wan, Mark Dras, Robert Dale, and Cécile Paris. Using dependency-based features to take the "para-farce" out of paraphrase. In Proceedings of the Australasian Language Technology Workshop, 2006.
[28] Nitin Madnani, Joel Tetreault, and Martin Chodorow. Re-examining machine translation metrics for paraphrase identification. In NAACL, 2012.
[29] Yangfeng Ji and Jacob Eisenstein. Discriminative improvements to distributional sentence similarity. In EMNLP, pages 891–896, 2013.
[30] Marco Marelli, Luisa Bentivogli, Marco Baroni, Raffaella Bernardi, Stefano Menini, and Roberto Zamparelli. SemEval-2014 Task 1: Evaluation of compositional distributional semantic models on full sentences through semantic relatedness and textual entailment. SemEval-2014, 2014.
[31] Bill Dolan, Chris Quirk, and Chris Brockett. Unsupervised construction of large paraphrase corpora: Exploiting massively parallel news sources. In Proceedings of the 20th International Conference on Computational Linguistics, 2004.
[32] A. Karpathy and L. Fei-Fei. Deep visual-semantic alignments for generating image descriptions. In CVPR, 2015.
[33] Benjamin Klein, Guy Lev, Gil Sadeh, and Lior Wolf. Associating neural word embeddings with deep image representations using Fisher vectors. In CVPR, 2015.
[34] Junhua Mao, Wei Xu, Yi Yang, Jiang Wang, and Alan Yuille. Deep captioning with multimodal recurrent neural networks (m-RNN). ICLR, 2015.
[35] Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C Lawrence Zitnick. Microsoft COCO: Common objects in context. In ECCV, pages 740–755, 2014.
[36] Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. ICLR, 2015.
[37] Sida Wang and Christopher D Manning. Baselines and bigrams: Simple, good sentiment and topic classification. In ACL, 2012.
[38] Laurens van der Maaten and Geoffrey Hinton. Visualizing data using t-SNE. JMLR, 2008.
Collaborative Filtering with Graph Information: Consistency and Scalable Methods

Nikhil Rao, Hsiang-Fu Yu, Pradeep Ravikumar, Inderjit S. Dhillon
{nikhilr, rofuyu, pradeepr, inderjit}@cs.utexas.edu
Department of Computer Science
University of Texas at Austin

Abstract

Low rank matrix completion plays a fundamental role in collaborative filtering applications, the key idea being that the variables lie in a smaller subspace than the ambient space. Often, additional information about the variables is known, and it is reasonable to assume that incorporating this information will lead to better predictions. We tackle the problem of matrix completion when pairwise relationships among variables are known, via a graph. We formulate and derive a highly efficient, conjugate gradient based alternating minimization scheme that solves optimizations with over 55 million observations up to 2 orders of magnitude faster than state-of-the-art (stochastic) gradient-descent based methods. On the theoretical front, we show that such methods generalize weighted nuclear norm formulations, and derive statistical consistency guarantees. We validate our results on both real and synthetic datasets.

1 Introduction

Low rank matrix completion approaches are among the most widely used collaborative filtering methods, where a partially observed matrix is available to the practitioner, who needs to impute the missing entries. Specifically, suppose there exists a ratings matrix Y ∈ R^{m×n}, and we only observe a subset of the entries Y_{ij}, ∀(i, j) ∈ Ω, with |Ω| = N ≪ mn. The goal is to estimate Y_{ij} for all (i, j) ∉ Ω. To this end, one typically looks to solve one of the following (equivalent) programs:

\hat{Z} = \arg\min_Z \frac{1}{2}\|P_\Omega(Y - Z)\|_F^2 + \lambda_z \|Z\|_*   (1)

\hat{W}, \hat{H} = \arg\min_{W,H} \frac{1}{2}\|P_\Omega(Y - WH^T)\|_F^2 + \frac{\lambda_w}{2}\|W\|_F^2 + \frac{\lambda_h}{2}\|H\|_F^2   (2)

where the nuclear norm ‖Z‖_*, given by the sum of singular values, is a tight convex relaxation of the non-convex rank penalty, and is equivalent to the regularizer in (2).
P_Ω(·) is the projection operator that only retains those entries of the matrix that lie in the set Ω. In many cases, however, one not only has the partially observed ratings matrix, but also has access to additional information about the relationships between the variables involved. For example, one might have access to a social network of users. Similarly, one might have access to attributes of items, movies, etc. The nature of the attributes can be fairly arbitrary, but it is reasonable to assume that "similar" users/items share "similar" attributes. A natural question to ask, then, is whether one can take advantage of this additional information to make better predictions. In this paper, we assume that the row and column variables lie on graphs. The graphs may naturally be part of the data (social networks, product co-purchasing graphs) or they can be constructed from available features. The idea then is to incorporate this additional structural information into the matrix completion setting. We not only require the resulting optimization program to enforce additional constraints on Z, but we also require it to admit efficient optimization algorithms. We show in the sections that follow that this is indeed the case. We also perform a theoretical analysis of our problem when the observed entries of Y are corrupted by additive white Gaussian noise. To summarize, the contributions of our paper are as follows:
• We provide a scalable algorithm for matrix completion with graph structural information. Our method relies on efficient Hessian-vector multiplication schemes, and is orders of magnitude faster than (stochastic) gradient descent based approaches.
• We make connections with other structured matrix factorization frameworks. Notably, we show that our method generalizes the weighted nuclear norm [21], and methods based on Gaussian generative models [27].
• We derive consistency guarantees for graph regularized matrix completion, and empirically show that our bound is smaller than that of traditional matrix completion, where graph information is ignored.
• We empirically validate our claims, and show that our method achieves comparable error rates to other methods, while being significantly more scalable.

Related Work and Key Differences

For convex methods for matrix factorization, Haeffele et al. [9] provided a framework to use regularizers with norms other than the Euclidean norm in (2). Abernethy et al. [1] considered a kernel based embedding of the data, and showed that the resulting problem can be expressed as a norm minimization scheme. Srebro and Salakhutdinov [21] introduced a weighted nuclear norm, and showed that the method enjoys superior performance as compared to standard matrix completion under a non-uniform sampling scheme. We show that the graph based framework considered in this paper is in fact a generalization of the weighted nuclear norm problem, with non-diagonal weight matrices. In the context of matrix factorization with graph structural information, [5] considered a graph regularized nonnegative matrix factorization framework and proposed a gradient descent based method to solve the problem. In the context of recommendation systems in social networks, Ma et al. [14] modeled the weight of a graph edge¹ explicitly in a re-weighted regularization framework. Li and Yeung [13] considered a similar setting to ours, but a key point of difference between all the aforementioned methods and our paper is that we consider the partially observed ratings case. There are some works developing algorithms for the setting with partial observations [12, 26, 27]; however, none of them provides statistical guarantees. Weighted norm minimization has been considered before ([16, 21]) in the context of low rank matrix completion.
The thrust of these methods has been to show that despite suboptimal conditions (correlated data, non-uniform sampling), the sample complexity does not change. None of these methods use graph information. We are interested in a complementary question: given variables conforming to graph information, can we obtain better guarantees under uniform sampling than those achieved by traditional methods?

2 Graph-Structured Matrix Factorization

Assume that the "true" target matrix can be factorized as Z* = W*(H*)^T, and that there exist a graph (V^w, E^w) whose adjacency matrix encodes the relationships between the m rows of W*, and a graph (V^h, E^h) for the n rows of H*. In particular, two rows (or columns) connected by an edge in the graph are "close" to each other in the Euclidean distance. In the context of graph-based embedding, [3, 4] proposed a smoothing term of the form

\frac{1}{2}\sum_{i,j} E^w_{ij} \|w_i - w_j\|^2 = \mathrm{tr}(W^T \mathrm{Lap}(E^w) W)   (3)

where Lap(E^w) := D^w − E^w is the graph Laplacian for (V^w, E^w), and D^w is the diagonal matrix with D^w_{ii} = Σ_{j∼i} E^w_{ij}. Adding (3) to the minimization problem (2) encourages solutions where w_i ≈ w_j when E^w_{ij} is large. A similar argument holds for H* and the associated graph Laplacian Lap(E^h).¹

¹The authors call this the "trust" between links in a social network.

We would thus not only want the target matrix to be low rank, but also want the variables W, H to be faithful to the underlying graph structure. To this end, we consider the following problem:

\min_{W,H} \frac{1}{2}\|P_\Omega(Y - WH^T)\|_F^2 + \frac{\lambda_L}{2}\{\mathrm{tr}(W^T \mathrm{Lap}(E^w) W) + \mathrm{tr}(H^T \mathrm{Lap}(E^h) H)\} + \frac{\lambda_w}{2}\|W\|_F^2 + \frac{\lambda_h}{2}\|H\|_F^2   (4)

\equiv \min_{W,H} \frac{1}{2}\|P_\Omega(Y - WH^T)\|_F^2 + \frac{1}{2}\{\mathrm{tr}(W^T L_w W) + \mathrm{tr}(H^T L_h H)\}   (5)

where L_w := λ_L Lap(E^w) + λ_w I_m, and L_h is defined similarly. Note that we subsume the regularization parameters in the definition of L_w, L_h, and that ‖W‖²_F = tr(W^T I_m W). The regularizer in (5) encourages solutions that are smooth with respect to the corresponding graphs.
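The identity in (3) between the pairwise smoothness penalty and the Laplacian quadratic form is easy to check numerically. The sketch below is illustrative only (our own function names), using a tiny 3-node graph; the sum in (3) runs over all ordered pairs (i, j).

```python
import numpy as np

def laplacian(E):
    """Graph Laplacian Lap(E) = D - E for a symmetric adjacency matrix E."""
    return np.diag(E.sum(axis=1)) - E

def smoothness(W, E):
    """tr(W^T Lap(E) W), which equals 0.5 * sum_{i,j} E_ij ||w_i - w_j||^2
    for symmetric E, with the sum over all ordered pairs (i, j)."""
    return float(np.trace(W.T @ laplacian(E) @ W))

# Tiny 3-node graph: only nodes 0 and 1 are connected.
E = np.array([[0., 1., 0.],
              [1., 0., 0.],
              [0., 0., 0.]])
W_smooth = np.array([[1., 2.], [1., 2.], [9., 9.]])   # w_0 == w_1
W_rough  = np.array([[1., 2.], [5., -3.], [9., 9.]])  # connected rows differ
print(smoothness(W_smooth, E))     # 0.0: connected rows agree exactly
print(smoothness(W_rough, E))      # 41.0 = ||w_0 - w_1||^2 here
```

The isolated node 2 contributes nothing, which is why only the disagreement between rows 0 and 1 is penalized.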
However, the Laplacian matrix can be replaced by other (positive semi-definite) matrices that encourage structure by different means. Indeed, a very general class of Laplacian based regularizers was considered in [20], where one can replace L_w by a function ⟨x, τ(Lap(E)) x⟩, with

\tau(\mathrm{Lap}(E)) \equiv \sum_{i=1}^{|V|} \tau(\lambda_i) q_i q_i^T,

where {(λ_i, q_i)} constitute the eigen-system of Lap(E) and τ(λ_i) is a scalar function of the eigenvalues. Our case corresponds to τ(·) being the identity function. We briefly summarize other schemes that fit neatly into (5), apart from the graph regularizer we consider:

Covariance matrices for variables: [27] proposed a kernelized probabilistic matrix factorization (KPMF), which is a generative model to incorporate covariance information of the variables into matrix factorization. They assumed that each row of W*, H* is generated according to a multivariate Gaussian, and solving the corresponding MAP estimation procedure yields exactly (5), with L_w = C_w^{-1} and L_h = C_h^{-1}, where C_w, C_h are the associated covariance matrices.

Feature matrices for variables: Assume that there is a feature matrix X ∈ R^{m×d} for the objects associated with the rows. For such X, one can construct a graph (and hence a Laplacian) using various methods such as k-nearest neighbors, ε-nearest neighbors, etc. Moreover, one can assume that there exists a kernel k(x_i, x_j) that encodes pairwise relations, and use the kernel Gram matrix as a Laplacian.

We can thus see that problem (5) is a very general scheme, and can incorporate information available in many different forms. In the sequel, we assume the matrices L_w, L_h are given. In the theoretical analysis in Section 5, for ease of exposition, we further assume that the minimum eigenvalues of L_w, L_h are unity. A general (nonzero) minimum eigenvalue will merely introduce multiplicative constants in our bounds.
3 GRALS: Graph Regularized Alternating Least Squares

In this section, we propose efficient algorithms for (5), which is convex with respect to W or H separately. This allows us to employ alternating minimization methods [25] to solve the problem. When Y is fully observed, Li and Yeung [13] propose an alternating minimization scheme using block steepest descent. We deal with the partially observed setting, and propose to apply conjugate gradient (CG), which is known to converge faster than steepest descent, to solve each subproblem. We propose a very efficient Hessian-vector multiplication routine that makes the algorithm highly scalable, compared to the (stochastic) gradient descent approaches in [14, 27]. We assume that Y ∈ R^{m×n}, W ∈ R^{m×k} and H ∈ R^{n×k}. When optimizing H with W fixed, we obtain the following sub-problem:

\min_H f(H) = \frac{1}{2}\|P_\Omega(Y - WH^T)\|_F^2 + \frac{1}{2}\mathrm{tr}(H^T L_h H).   (6)

Optimizing W while H is fixed is similar, and thus we only show the details for solving (6). Since L_h is nonsingular, (6) is strongly convex.² We first present our algorithm for the fully observed case, since it sets the groundwork for the partially observed setting.

²In fact, a singular L_h can be handled using proximal updates, and our algorithm will still apply.

Algorithm 1: Hessian-vector multiplication for g(s)
  Given: matrices L_h, W
  Initialization: G = W^T W
  Multiplication ∇²g(s₀)s:
    1. Input: S ∈ R^{n×k} such that s = vec(S)
    2. A ← SG + L_h S
    3. Return: vec(A)

Algorithm 2: Hessian-vector multiplication for g_Ω(s)
  Given: matrices L_h, W, Ω
  Multiplication ∇²g(s₀)s:
    1. Input: S ∈ R^{k×n} such that s = vec(S)
    2. Compute K = [k_1, ..., k_n] such that k_j ← Σ_{i∈Ω_j} (w_i^T s_j) w_i
    3. A ← K + S L_h
    4. Return: vec(A)

3.1 Fully Observed Case

As in [5, 13] among others, there may be scenarios where Y is completely observed, and the goal is to find the row/column embeddings that conform to the corresponding graphs. In this case, the loss term in (6) is simply ‖Y − WH^T‖²_F.
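Algorithms 1 and 2 translate almost line-for-line into numpy. The sketch below (our own naming, with tiny random stand-in data) also verifies Algorithm 1 against the explicitly formed Kronecker-structured Hessian on a small problem, so it is an illustration rather than production code.

```python
import numpy as np

def hess_vec_full(S, Lh, G):
    """Algorithm 1: with s = vec(S), S in R^{n x k}, and G = W^T W
    precomputed, the Hessian-vector product is vec(S G + Lh S)."""
    return S @ G + Lh @ S

def hess_vec_obs(S, Lh, W, omega_cols):
    """Algorithm 2: S is k x n with s = vec(S); omega_cols[j] lists the
    observed row indices of column j. Each k_j = B_j s_j is formed as
    sum_{i in Omega_j} (w_i^T s_j) w_i without materializing B_j."""
    K = np.zeros_like(S)
    for j, rows in enumerate(omega_cols):
        if rows:
            Wj = W[rows]                   # |Omega_j| x k slice of W
            K[:, j] = Wj.T @ (Wj @ S[:, j])
    return K + S @ Lh

# Check Algorithm 1 against M = I_k (x) Lh + (W^T W) (x) I_n.
rng = np.random.default_rng(1)
n, k = 5, 2
W = rng.normal(size=(n, k))
G = W.T @ W
Lh = np.diag(rng.uniform(1.0, 2.0, n))     # simple PSD stand-in
S = rng.normal(size=(n, k))
M = np.kron(np.eye(k), Lh) + np.kron(G, np.eye(n))
s = S.reshape(-1, order="F")               # column-stacking vec()
print(np.allclose(M @ s, hess_vec_full(S, Lh, G).reshape(-1, order="F")))  # True

# Check one block of Algorithm 2 against B_j = sum_i w_i w_i^T.
omega_cols = [[0, 2], [1], [], [3, 4], [0]]
St = rng.normal(size=(k, n))
Bj = sum(np.outer(W[i], W[i]) for i in omega_cols[0])
out = hess_vec_obs(St, Lh, W, omega_cols)
print(np.allclose(out[:, 0], Bj @ St[:, 0] + (St @ Lh)[:, 0]))  # True
```

The point of both routines is that the nk × nk Hessian is never formed; each product costs a few thin matrix multiplications.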
Thus, setting ∇f(H) = 0 is equivalent to solving the following Sylvester equation for an n×k matrix H:

H W^T W + L_h H = Y^T W.   (7)

Equation (7) admits a closed form solution. However, the standard Bartels-Stewart algorithm for the Sylvester equation requires transforming both W^T W and L_h into Schur form (diagonal in our case, where W^T W and L_h are symmetric) by the QR algorithm, which is time consuming for a large L_h. Thus, we consider applying conjugate gradient (CG) to minimize f(H) directly. We define the following quadratic function:

g(s) := \frac{1}{2}s^T M s - \mathrm{vec}(Y^T W)^T s, \quad s \in R^{nk}, \quad M = I_k \otimes L_h + (W^T W) \otimes I_n.

It is not hard to show that f(H) = g(vec(H)), and so we apply CG to minimize g(s). The most crucial step in CG is the Hessian-vector multiplication. Using the identity (B^T ⊗ A) vec(X) = vec(AXB), it follows that (I_k ⊗ L_h) s = vec(L_h S) and ((W^T W) ⊗ I_n) s = vec(S W^T W), where vec(S) = s. Thus the Hessian-vector multiplication can be implemented by a series of matrix multiplications:

M s = \mathrm{vec}(L_h S + S (W^T W)),

where W^T W can be pre-computed and stored in O(k²) space. The details are presented in Algorithm 1. The time complexity for a single CG iteration is O(nnz(L_h) k + n k²), where nnz(·) is the number of non-zeros. Since in most practical applications k is generally small, the complexity is essentially O(nnz(L_h) k) as long as nk ≤ nnz(L_h).

3.2 Partially Observed Case

In this case, the loss term of (6) becomes Σ_{(i,j)∈Ω} (Y_{ij} − w_i^T h_j)², where w_i^T is the i-th row of W and h_j is the j-th column of H^T. Similar to the fully observed case, we can define

g_\Omega(s) := \frac{1}{2}s^T M_\Omega s - \mathrm{vec}(W^T Y)^T s,

where M_Ω = B̄ + L_h ⊗ I_k, and B̄ ∈ R^{nk×nk} is a block diagonal matrix with n diagonal blocks B_j ∈ R^{k×k}, given by B_j = Σ_{i∈Ω_j} w_i w_i^T with Ω_j = {i : (i, j) ∈ Ω}. Again, we can see that f(H) = g_Ω(vec(H^T)). Note that the transpose H^T is used here, instead of the H used in the fully observed case. For a given s, let S = [s_1, ..., s_j, ...,
s_n] be a matrix such that vec(S) = s, and let K = [k_1, ..., k_j, ..., k_n] with k_j = B_j s_j. Then B̄ s = vec(K). Note that since n can be very large in practice, it may not be feasible to compute and store all B_j up front. Alternatively, B_j s_j can be computed in O(|Ω_j| k) time as

B_j s_j = \sum_{i \in \Omega_j} (w_i^T s_j) w_i.

Thus B̄ s can be computed in O(|Ω| k) time, and the Hessian-vector multiplication M_Ω s can be done in O(|Ω| k + nnz(L_h) k) time. See Algorithm 2 for a detailed procedure. As a result, each CG iteration for minimizing g_Ω(s) is also very cheap.

Remark on convergence. In [2], it is shown that any local minimizer of (5) is a global minimizer of (5) if k is larger than the true rank of the underlying matrix.³ From [25], the alternating minimization procedure is guaranteed to converge globally to a block coordinate-wise minimum⁴ of (5). The converged point might not be a local minimizer, but it still yields good performance in practice. Most importantly, since the updates are cheap to perform, our algorithm scales well to large datasets.

4 Convex Connection via Generalized Weighted Nuclear Norm

We now show that the regularizer in (5) can be cast as a generalized version of the weighted nuclear norm. The weights in our case will correspond to the scaling factors introduced on the matrices W, H by the eigenvalues of the shifted graph Laplacians L_w, L_h respectively.

4.1 A weighted atomic norm

From [7], we know that the nuclear norm is the gauge function induced by the atomic set A_* = {w_i h_i^T : ‖w_i‖ = ‖h_i‖ = 1}. Note that all rank-one matrices in A_* have unit Frobenius norm. Now, assume P = [p_1, ..., p_m] ∈ R^{m×m} is a basis of R^m and S_p^{-1/2} is a diagonal matrix with (S_p^{-1/2})_{ii} ≥ 0 encoding the "preference" over the space spanned by p_i: the more the preference, the larger the value. Similarly, consider the basis Q and the preference S_q^{-1/2} for R^n.
Let A = P S_p^{-1/2} and B = Q S_q^{-1/2}, and consider the following "preferential" atomic set:

\mathcal{A} := \{ a_i = w_i h_i^T : w_i = A u_i, \; h_i = B v_i, \; \|u_i\| = \|v_i\| = 1 \}.   (8)

Clearly, each atom in 𝒜 has non-unit Frobenius norm. This atomic set allows for biasing the solutions towards certain atoms. We then define a corresponding atomic norm:

\|Z\|_{\mathcal{A}} = \inf \sum_{a_i \in \mathcal{A}} |c_i| \quad \text{s.t.} \quad Z = \sum_{a_i \in \mathcal{A}} c_i a_i.   (9)

It is not hard to verify that ‖Z‖_𝒜 is a norm and that {Z : ‖Z‖_𝒜 ≤ τ} is closed and convex.

4.2 Equivalence to Graph Regularization

The graph regularization (5) can be shown to be a special case of the atomic norm (9), as a consequence of the following result:

Theorem 1. For any A = P S_p^{-1/2}, B = Q S_q^{-1/2}, and corresponding weighted atomic set 𝒜,

\|Z\|_{\mathcal{A}} = \inf_{W,H} \frac{1}{2}\{\|A^{-1}W\|_F^2 + \|B^{-1}H\|_F^2\} \quad \text{s.t.} \quad Z = WH^T.

We prove this result in Appendix A. Theorem 1 immediately leads us to the following equivalence result:

Corollary 1. Let L_w = U_w S_w U_w^T and L_h = U_h S_h U_h^T be the eigen-decompositions of L_w and L_h. We have

\mathrm{tr}(W^T L_w W) = \|A^{-1}W\|_F^2 \quad \text{and} \quad \mathrm{tr}(H^T L_h H) = \|B^{-1}H\|_F^2,

where A = U_w S_w^{-1/2} and B = U_h S_h^{-1/2}. As a result, ‖M‖_𝒜 with the preference pair (U_w, S_w^{-1/2}) for the column space and the preference pair (U_h, S_h^{-1/2}) for the row space is a weighted atomic norm equivalent to the graph regularization using L_w and L_h.

The results above allow us to obtain the dual weighted atomic norm for a matrix Z:

\|Z\|_{\mathcal{A}}^* = \|A^T Z B\| = \|S_w^{-1/2} U_w^T Z U_h S_h^{-1/2}\|,   (10)

which is a weighted spectral norm. An elementary proof of this result can be found in Appendix B. Note that we can then write

\|Z\|_{\mathcal{A}} = \|A^{-1} Z B^{-T}\|_* = \|S_w^{1/2} U_w^{-1} Z U_h^{-T} S_h^{1/2}\|_*.   (11)

In [21], the authors consider a norm similar to (11), but with A, B being diagonal matrices. In the spirit of their nomenclature, we refer to the norm in (11) as the generalized weighted nuclear norm.

³The authors actually show this for a more general class of regularizers.
⁴Nash equilibrium is used in [25].
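Corollary 1 is easy to sanity-check numerically: with A = U_w S_w^{-1/2} built from the eigen-decomposition of L_w, the trace tr(W^T L_w W) matches ‖A^{-1}W‖²_F. The sketch below is illustrative (random small graph, our own construction), not code from the paper.

```python
import numpy as np

rng = np.random.default_rng(4)
m, k = 6, 3
# Build a nonsingular L_w = Lap(E) + lambda_w * I from a random graph.
E = rng.integers(0, 2, size=(m, m)).astype(float)
E = np.triu(E, 1)
E = E + E.T
Lw = np.diag(E.sum(axis=1)) - E + 1.0 * np.eye(m)
# Eigen-decomposition L_w = U S U^T and the preference map A = U S^{-1/2}.
S_eig, U = np.linalg.eigh(Lw)
A = U @ np.diag(S_eig ** -0.5)
W = rng.normal(size=(m, k))
lhs = np.trace(W.T @ Lw @ W)                          # graph regularizer
rhs = np.linalg.norm(np.linalg.solve(A, W), "fro") ** 2  # ||A^{-1} W||_F^2
print(np.allclose(lhs, rhs))  # True
```

The same identity holds for H with B built from L_h, which is exactly why the regularizer in (5) induces the atomic norm of Theorem 1.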
5 Statistical Consistency in the Presence of Noisy Measurements

In this section, we derive theoretical guarantees for the graph-regularized low-rank matrix estimators. We first introduce some additional notation. We assume that there is an m × n matrix Z* of rank k with ‖Z*‖_F = 1, and N = |Ω| entries of Z* are uniformly sampled⁵ and revealed to us (i.e., Y = P_Ω(Z*)). We further assume a one-to-one mapping between the set of observed indices Ω and {1, 2, …, N}, so that the t-th measurement is given by

y_t = Y_{i(t),j(t)} = ⟨e_{i(t)} e_{j(t)}^T, Z*⟩ + (σ/√(mn)) η_t,   η_t ∼ N(0, 1),   (12)

where ⟨·,·⟩ denotes the matrix trace inner product, and (i(t), j(t)) is a randomly selected coordinate pair from [m] × [n]. Let A, B be the corresponding matrices defined in Corollary 1 for the given L_w, L_h. Without loss of generality, we assume that the minimum singular value of both L_w and L_h is 1. We then define the following graph-based complexity measures:

α_g(Z) := √(mn) ‖A^{−1} Z B^{−T}‖_∞ / ‖A^{−1} Z B^{−T}‖_F,   β_g(Z) := ‖A^{−1} Z B^{−T}‖_* / ‖A^{−1} Z B^{−T}‖_F,   (13)

where ‖·‖_∞ is the element-wise ℓ_∞ norm. Finally, we assume that the true matrix Z* can be expressed as a linear combination of atoms from (8) (we define α* := α_g(Z*)):

Z* = A U* (V*)^T B^T,   U* ∈ R^{m×k}, V* ∈ R^{n×k}.   (14)

Our goal in this section is to characterize the solution to the following convex program, whose constraint set precludes the selection of overly complex matrices in the sense of (13):

Ẑ = argmin_{Z∈C} (1/N) ‖P_Ω(Y − Z)‖_F² + λ‖Z‖_A,   where C := { Z : α_g(Z) β_g(Z) ≤ c̄_0 √(N / log(m + n)) },   (15)

where c̄_0 is a constant depending on α*. A quick note on solving (15): since ‖·‖_A is a weighted nuclear norm, one can resort to proximal point methods [6], or to greedy methods developed specifically for atomic-norm-constrained minimization [18, 22]. The latter are particularly attractive, since the greedy step reduces to computing the maximum singular vectors, which can be done efficiently using power methods.
However, such methods require first computing the eigendecompositions of the graph Laplacians and then storing the large, dense matrices A, B. We therefore refrain from using them in Section 6, and instead use the efficient framework derived in Section 3. We now state our main theoretical result:

Theorem 2. Suppose we observe N entries of the form (12) from a matrix Z* ∈ R^{m×n}, with α* := α_g(Z*), which can be represented using at most k atoms from (8). Let Ẑ be the minimizer of the convex problem (15) with λ ≥ C_1 √((m + n) log(m + n) / N). Then, with high probability,

‖Ẑ − Z*‖_F² ≤ C α*² max{1, σ²} · k(m + n) log(m + n) / N + O(α*² / N),

where C, C_1 are positive constants.

See Appendix C for the detailed proof. A proof sketch is as follows. There are three major portions of the proof:

• Using the fact that Z* has unit Frobenius norm and can be expressed as a combination of at most k atoms, we show ‖Z*‖_A ≤ √k (Appendix C.1).
• Using (10), we derive a bound on the dual norm of the gradient of the loss L(Z), given by ‖∇L(Z)‖*_A = ‖S_w^{−1/2} U_w^T ∇L(Z) U_h S_h^{−1/2}‖ (Appendix C.2).
• Finally, using (13), we define a notion of restricted strong convexity (RSC) on the set in which the "error" matrices Z* − Ẑ lie. The proof follows closely along the lines of the equivalent result in [16], with appropriate modifications to accommodate our generalized weighted nuclear norm (Appendix C.3).

5.1 Comparison to Standard Matrix Completion: It is instructive to consider our result in the context of noisy matrix completion with uniform samples. In this case, one would replace L_w, L_h by identity matrices, effectively ignoring the available graph information. Specifically, the "standard" notion of spikiness (α_n := √(mn) ‖Z‖_∞ / ‖Z‖_F) defined in [16] applies, and the corresponding error bound (Theorem 2) has α* replaced by α_n(Z*).

⁵Our results can be generalized to non-uniform sampling schemes as well.
In general, it is hard to quantify the relationship between α_g and α_n, and a detailed comparison is an interesting topic for future work. However, we show below, using simulations of various scenarios, that the former is much smaller than the latter. We generate m × m matrices of rank k = 10, M = U Σ V^T, with U, V random orthonormal matrices and Σ having diagonal elements drawn from a uniform[0, 1] distribution. We generate graphs at random using the schemes discussed below, and set Z = A M B^T, with A, B as defined in Corollary 1. We then compute α_n, α_g for various m.

Comparing α_g to α_n: Most real-world graphs exhibit a power-law degree distribution. We generated graphs with the i-th node having degree m × i^p for varying negative values of p. Figure 1(a) shows that as p → 0 from below, the gains from using our norm are clear compared to the standard nuclear norm. We also observe that in general the weighted formulation is never worse than the unweighted one (the dotted magenta line is α_n/α_g = 1). The same holds for random graphs, where there is an edge between each pair (i, j) with varying probability p (Figure 1(b)).

Figure 1: (a), (b): Ratio of spikiness measures α_n/α_g for traditional matrix completion and our formulation, for power-law (a) and random (b) graphs, at m = 100, 200, 300. (c): Sample complexity (MSE vs. number of measurements) for the nuclear norm (NN) and the generalized weighted nuclear norm (GWNN).

Sample complexity: We tested the number of samples needed to recover an m = n = 200, k = 20 matrix generated from a power-law-distributed graph with p = −0.5. Figure 1(c) again shows that the atomic formulation requires fewer measurements for accurate recovery. We average the results over 10 independent runs, and used [18] to solve the atomic-norm-constrained problem.
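The spikiness comparison above can be sketched in a few lines of NumPy. The helpers `spd_laplacian` and `pref_matrix` are our illustrative stand-ins (a random graph rather than the paper's power-law generator); note that A^{−1} Z B^{−T} = M by construction, so α_g(Z) is simply the spikiness of the smooth factor M.

```python
import numpy as np

rng = np.random.default_rng(3)
m, k = 60, 10

def spd_laplacian(m, rng):
    """Random shifted graph Laplacian stand-in (symmetric positive definite)."""
    adj = (rng.random((m, m)) < 0.1).astype(float)
    adj = np.triu(adj, 1); adj = adj + adj.T
    L = np.diag(adj.sum(1)) - adj
    return L + np.eye(m)            # shift so the minimum eigenvalue is >= 1

def pref_matrix(L):
    s, U = np.linalg.eigh(L)
    return U @ np.diag(s ** -0.5)   # A = U S^{-1/2}, as in Corollary 1

Lw, Lh = spd_laplacian(m, rng), spd_laplacian(m, rng)
A, B = pref_matrix(Lw), pref_matrix(Lh)

# rank-k M with random orthonormal factors, then Z = A M B^T as in the text
U, _ = np.linalg.qr(rng.standard_normal((m, k)))
V, _ = np.linalg.qr(rng.standard_normal((m, k)))
M = U @ np.diag(rng.random(k)) @ V.T
Z = A @ M @ B.T

def spikiness(X):
    return np.sqrt(X.size) * np.abs(X).max() / np.linalg.norm(X)

alpha_n = spikiness(Z)                                           # standard spikiness
alpha_g = spikiness(np.linalg.inv(A) @ Z @ np.linalg.inv(B).T)   # graph-based (13)
print(alpha_n / alpha_g)
```

For graph-generated Z the ratio α_n/α_g is typically greater than one, which is the trend Figure 1 reports.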
6 Experiments on Real Datasets

Comparison to related formulations: We compare GRALS to other methods that incorporate side information for matrix completion: the ADMM method of [12], which regularizes the entire target matrix; inductive matrix completion using known features (IMC) [10, 24]; and standard matrix completion (MC). We use the MOVIELENS 100k dataset,⁶ which has user/movie features along with the ratings matrix. The dataset contains user features (such as age (numeric), gender (binary), and occupation), which we map into a 22-dimensional feature vector per user. We then construct a 10-nearest-neighbor graph using the Euclidean distance metric. We do the same for the movies, except that in this case we have an 18-dimensional feature vector per movie. For IMC, we use the feature vectors directly. We trained a model of rank 10, and chose optimal parameters by cross-validation. Table 1 shows the RMSE obtained by the methods considered. Figure 2 shows that the ADMM method, while obtaining a reasonable RMSE, does not scale well, since one has to compute an SVD at each iteration.

Figure 2: Time comparison of different methods (ADMM, MC, GRALS) on MOVIELENS 100k (RMSE vs. log10 time in seconds).

Table 1: RMSE on the MOVIELENS dataset.
Method        RMSE
IMC           1.653
Global mean   1.154
User mean     1.063
Movie mean    1.033
ADMM          0.996
MC            0.973
GRALS         0.945

Table 2: Data statistics.
Dataset            # users    # items    # ratings     # links       rank used
Flixster ([11])    147,612    48,794     8,196,077     2,538,746     10
Douban ([14])      129,490    58,541     16,830,839    1,711,802     10
YahooMusic ([8])   249,012    296,111    55,749,965    57,248,136    20

Scalability of GRALS: We now demonstrate that the proposed GRALS method is more efficient than other state-of-the-art methods for solving the graph-regularized matrix factorization problem (5). We compare GRALS to the SGD method of [27], and to GD: ALS with simple gradient descent.

⁶http://grouplens.org/datasets/movielens/
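The k-nearest-neighbor graph construction described above (Euclidean k-NN over feature vectors, then a graph Laplacian) can be sketched as follows; the helper name and toy feature matrix are ours:

```python
import numpy as np

def knn_graph_laplacian(X, k):
    """Build a symmetrized k-NN adjacency over the rows of X (Euclidean
    distance) and return its graph Laplacian L = D - W."""
    n = X.shape[0]
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)   # pairwise squared distances
    np.fill_diagonal(d2, np.inf)                          # exclude self-edges
    W = np.zeros((n, n))
    for i in range(n):
        W[i, np.argsort(d2[i])[:k]] = 1.0
    W = np.maximum(W, W.T)                                # symmetrize
    return np.diag(W.sum(1)) - W

rng = np.random.default_rng(4)
F = rng.standard_normal((30, 22))       # e.g., 22-dim user feature vectors
L = knn_graph_laplacian(F, k=10)
```

The resulting L is symmetric positive semidefinite with zero row sums, which is the form the regularizer in (5) expects (possibly after a shift to make it positive definite).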
We consider three large-scale real-world collaborative filtering datasets with graph information; see Table 2 for details.⁷ We randomly select 90% of the ratings as the training set and use the remaining 10% as the test set. All experiments are performed on an Intel machine with a Xeon E5-2680 v2 (Ivy Bridge) CPU and sufficient RAM. Figure 3 shows orders-of-magnitude improvements in time compared to SGD. More experimental results are provided in the supplementary material.

Figure 3: Comparison of GRALS, GD, and SGD on (a) Flixster, (b) Douban, and (c) YahooMusic. The x-axis is the computation time in log scale.

7 Discussion

In this paper, we considered the problem of collaborative filtering with graph information for users and/or items, and showed that it can be cast as a generalized weighted nuclear norm problem. We derived statistical consistency guarantees for our method, and developed a highly scalable alternating minimization algorithm. Experiments on large real-world datasets show that our method achieves ~2 orders of magnitude speedups over competing approaches.

Acknowledgments

This research was supported by NSF grant CCF-1320746. H.-F. Yu acknowledges support from an Intel PhD fellowship. NR was supported by an ICES fellowship.

⁷See more details in Appendix D.

References
[1] Jacob Abernethy, Francis Bach, Theodoros Evgeniou, and Jean-Philippe Vert. Low-rank matrix factorization with attributes. arXiv preprint cs/0611124, 2006.
[2] Francis Bach, Julien Mairal, and Jean Ponce. Convex sparse matrix factorizations. CoRR, abs/0812.1869, 2008.
[3] Mikhail Belkin and Partha Niyogi. Laplacian eigenmaps and spectral techniques for embedding and clustering. In NIPS, volume 14, pages 585–591, 2001.
[4] Mikhail Belkin and Partha Niyogi. Laplacian eigenmaps for dimensionality reduction and data representation. Neural Computation, 15(6):1373–1396, 2003.
[5] Deng Cai, Xiaofei He, Jiawei Han, and Thomas S. Huang. Graph regularized nonnegative matrix factorization for data representation.
Pattern Analysis and Machine Intelligence, IEEE Transactions on, 33(8): 1548–1560, 2011. [6] Jian-Feng Cai, Emmanuel J Cand`es, and Zuowei Shen. A singular value thresholding algorithm for matrix completion. SIAM Journal on Optimization, 20(4):1956–1982, 2010. [7] Venkat Chandrasekaran, Benjamin Recht, Pablo A Parrilo, and Alan S Willsky. The convex geometry of linear inverse problems. Foundations of Computational Mathematics, 12(6):805–849, 2012. [8] Gideon Dror, Noam Koenigstein, Yehuda Koren, and Markus Weimer. The yahoo! music dataset and kdd-cup’11. In KDD Cup, pages 8–18, 2012. [9] Benjamin Haeffele, Eric Young, and Rene Vidal. Structured low-rank matrix factorization: Optimality, algorithm, and applications to image processing. In Proceedings of the 31st International Conference on Machine Learning (ICML-14), pages 2007–2015, 2014. [10] Prateek Jain and Inderjit S Dhillon. Provable inductive matrix completion. arXiv preprint arXiv:1306.0626, 2013. [11] Mohsen Jamali and Martin Ester. A matrix factorization technique with trust propagation for recommendation in social networks. In Proceedings of the Fourth ACM Conference on Recommender Systems, RecSys ’10, pages 135–142, 2010. [12] Vassilis Kalofolias, Xavier Bresson, Michael Bronstein, and Pierre Vandergheynst. Matrix completion on graphs. (EPFL-CONF-203064), 2014. [13] Wu-Jun Li and Dit-Yan Yeung. Relation regularized matrix factorization. In 21st International Joint Conference on Artificial Intelligence, 2009. [14] Hao Ma, Dengyong Zhou, Chao Liu, Michael R. Lyu, and Irwin King. Recommender systems with social regularization. In Proceedings of the fourth ACM international conference on Web search and data mining, WSDM ’11, pages 287–296, Hong Kong, China, 2011. [15] Paolo Massa and Paolo Avesani. Trust-aware bootstrapping of recommender systems. ECAI Workshop on Recommender Systems, pages 29–33, 2006. [16] Sahand Negahban and Martin J Wainwright. 
Restricted strong convexity and weighted matrix completion: Optimal bounds with noise. The Journal of Machine Learning Research, 13(1):1665–1697, 2012. [17] Sahand N Negahban, Pradeep Ravikumar, Martin J Wainwright, and Bin Yu. A unified framework for high-dimensional analysis of m-estimators with decomposable regularizers. Statistical Science, 27(4): 538–557, 2012. [18] Nikhil Rao, Parikshit Shah, and Stephen Wright. Conditional gradient with enhancement and truncation for atomic-norm regularization. NIPS workshop on Greedy Algorithms, 2013. [19] Benjamin Recht. A simpler approach to matrix completion. The Journal of Machine Learning Research, 12:3413–3430, 2011. [20] Alexander J Smola and Risi Kondor. Kernels and regularization on graphs. In Learning theory and kernel machines, pages 144–158. Springer, 2003. [21] Nathan Srebro and Ruslan R Salakhutdinov. Collaborative filtering in a non-uniform world: Learning with the weighted trace norm. In Advances in Neural Information Processing Systems, pages 2056–2064, 2010. [22] Ambuj Tewari, Pradeep K Ravikumar, and Inderjit S Dhillon. Greedy algorithms for structurally constrained high dimensional problems. In Advances in Neural Information Processing Systems, pages 882– 890, 2011. [23] Roman Vershynin. A note on sums of independent random matrices after ahlswede-winter. Lecture notes, 2009. [24] Miao Xu, Rong Jin, and Zhi-Hua Zhou. Speedup matrix completion with side information: Application to multi-label learning. In Advances in Neural Information Processing Systems, pages 2301–2309, 2013. [25] Yangyang. Xu and Wotao Yin. A block coordinate descent method for regularized multiconvex optimization with applications to nonnegative tensor factorization and completion. SIAM Journal on Imaging Sciences, 6(3):1758–1789, 2013. [26] Zhou Zhao, Lijun Zhang, Xiaofei He, and Wilfred Ng. Expert finding for question answering via graph regularized matrix completion. 
Knowledge and Data Engineering, IEEE Transactions on, PP(99), 2014.
[27] Tinghui Zhou, Hanhuai Shan, Arindam Banerjee, and Guillermo Sapiro. Kernelized probabilistic matrix factorization: Exploiting graphs and side information. In SDM, volume 12, pages 403–414. SIAM, 2012.
Gaussian Process Random Fields

David A. Moore and Stuart J. Russell
Computer Science Division
University of California, Berkeley
Berkeley, CA 94709
{dmoore, russell}@cs.berkeley.edu

Abstract

Gaussian processes have been successful in both supervised and unsupervised machine learning tasks, but their computational complexity has constrained practical applications. We introduce a new approximation for large-scale Gaussian processes, the Gaussian Process Random Field (GPRF), in which local GPs are coupled via pairwise potentials. The GPRF likelihood is a simple, tractable, and parallelizable approximation to the full GP marginal likelihood, enabling latent variable modeling and hyperparameter selection on large datasets. We demonstrate its effectiveness on synthetic spatial data as well as a real-world application to seismic event location.

1 Introduction

Many machine learning tasks can be framed as learning a function given noisy information about its inputs and outputs. In regression and classification, we are given inputs and asked to predict the outputs; by contrast, in latent variable modeling we are given a set of outputs and asked to reconstruct the inputs that could have produced them. Gaussian processes (GPs) are a flexible class of probability distributions on functions that allow us to approach function-learning problems from an appealingly principled and clean Bayesian perspective. Unfortunately, the time complexity of exact GP inference is O(n³), where n is the number of data points. This makes exact GP calculations infeasible for real-world data sets with n > 10000. Many approximations have been proposed to escape this limitation. One particularly simple approximation is to partition the input space into smaller blocks, replacing a single large GP with a multitude of local ones. This gains tractability at the price of a potentially severe independence assumption.
In this paper we relax the strong independence assumptions of independent local GPs, proposing instead a Markov random field (MRF) of local GPs, which we call a Gaussian Process Random Field (GPRF). A GPRF couples local models via pairwise potentials that incorporate covariance information. This yields a surrogate for the full GP marginal likelihood that is simple to implement and can be tractably evaluated and optimized on large datasets, while still enforcing a smooth covariance structure. The task of approximating the marginal likelihood is motivated by unsupervised applications such as the GP latent variable model [1], but examining the predictions made by our model also yields a novel interpretation of the Bayesian Committee Machine [2]. We begin by reviewing GPs and MRFs, and some existing approximation methods for large-scale GPs. In Section 3 we present the GPRF objective and examine its properties as an approximation to the full GP marginal likelihood. We then evaluate it on synthetic data as well as an application to seismic event location.

Figure 1: Predictive distributions on a toy regression problem: (a) full GP; (b) local GPs; (c) Bayesian committee machine.

2 Background

2.1 Gaussian processes

Gaussian processes [3] are distributions on real-valued functions. GPs are parameterized by a mean function µ_θ(x), typically assumed without loss of generality to be µ(x) = 0, and a covariance function (sometimes called a kernel) k_θ(x, x′), with hyperparameters θ. A common choice is the squared exponential covariance,

k_SE(x, x′) = σ_f² exp( −(1/2) ‖x − x′‖² / ℓ² ),

with hyperparameters σ_f² and ℓ specifying, respectively, a prior variance and a correlation lengthscale. We say that a random function f(x) is Gaussian process distributed if, for any n input points X, the vector of function values f = f(X) is multivariate Gaussian, f ∼ N(0, k_θ(X, X)). In many applications we have access only to noisy observations y = f + ε for some noise process ε.
If the noise is iid Gaussian, i.e., ε ∼ N(0, σ_n² I), then the observations are themselves Gaussian, y ∼ N(0, K_y), where K_y = k_θ(X, X) + σ_n² I. The most common application of GPs is to Bayesian regression [3], in which we attempt to predict the function values f* at test points X* via the conditional distribution given the training data, p(f* | y; X, X*, θ). Sometimes, however, we do not observe the training inputs X, or we observe them only partially or noisily. This setting is known as the Gaussian Process Latent Variable Model (GP-LVM) [1]; it uses GPs as a model for unsupervised learning and nonlinear dimensionality reduction. The GP-LVM setting typically involves multi-dimensional observations, Y = (y^(1), …, y^(D)), with each output dimension y^(d) modeled as an independent Gaussian process. The input locations and/or hyperparameters are typically sought via maximization of the marginal likelihood

L(X, θ) = log p(Y; X, θ) = Σ_{i=1}^{D} [ −(1/2) log |K_y| − (1/2) y_i^T K_y^{−1} y_i ] + C = −(D/2) log |K_y| − (1/2) tr(K_y^{−1} Y Y^T) + C,   (1)

though some recent work [4, 5] attempts to recover an approximate posterior on X by maximizing a variational bound. Given a differentiable covariance function, this maximization is typically performed by gradient-based methods, although local maxima can be a significant concern, as the marginal likelihood is generally non-convex.

2.2 Scalability and approximate inference

The main computational difficulty in GP methods is the need to invert or factor the kernel matrix K_y, which requires time cubic in n. In GP-LVM inference this must be done at every optimization step to evaluate (1) and its derivatives. This complexity has inspired a number of approximations. The most commonly studied are inducing-point methods, in which the unknown function is represented by its values at a set of m inducing points, where m ≪ n.
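Before surveying the approximations, note that the marginal likelihood (1) above is straightforward to evaluate directly for small n. The following is a minimal NumPy sketch (SE kernel, constant C dropped); the function name and toy data are ours:

```python
import numpy as np

def gp_lvm_loglik(X, Y, sigma_f, ell, sigma_n):
    """Evaluate (1) up to the constant C:
    L(X, theta) = -D/2 log|K_y| - 1/2 tr(K_y^{-1} Y Y^T),
    with K_y = k_SE(X, X) + sigma_n^2 I."""
    n, D = Y.shape
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    Ky = sigma_f ** 2 * np.exp(-0.5 * d2 / ell ** 2) + sigma_n ** 2 * np.eye(n)
    L = np.linalg.cholesky(Ky)               # the O(n^3) factorization per step
    logdet = 2.0 * np.log(np.diag(L)).sum()
    alpha = np.linalg.solve(Ky, Y)           # K_y^{-1} Y
    return -0.5 * D * logdet - 0.5 * np.trace(Y.T @ alpha)

rng = np.random.default_rng(1)
X = rng.standard_normal((40, 2))             # candidate latent inputs
Y = rng.standard_normal((40, 5))             # D = 5 observed output dimensions
ll = gp_lvm_loglik(X, Y, sigma_f=1.0, ell=1.0, sigma_n=0.5)
```

The Cholesky factorization inside this function is exactly the cubic-cost step that every optimization iteration must repeat, which motivates the approximations discussed next.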
These points can be chosen by maximizing the marginal likelihood in a surrogate model [6, 7] or by minimizing the KL divergence between the approximate and exact GP posteriors [8]. Inference in such models can typically be done in O(nm²) time, but this comes at the price of reduced representational capacity: while smooth functions with long lengthscales may be compactly represented by a small number of inducing points, for quickly-varying functions with significant local structure it may be difficult to find any faithful representation more compact than the complete set of training observations. A separate class of approximations, so-called "local" GP methods [3, 9, 10], involves partitioning the inputs into blocks of m points each, then modeling each block with an independent Gaussian process. If the partition is spatially local, this corresponds to a covariance function that imposes independence between function values in different regions of the input space. Computationally, each block requires only O(m³) time; the total time is linear in the number of blocks. Local approximations preserve short-lengthscale structure within each block, but their harsh independence assumptions can lead to predictive discontinuities and inaccurate uncertainties (Figure 1b). These assumptions are problematic for GP-LVM inference because the marginal likelihood becomes discontinuous at block boundaries. Nonetheless, local GPs sometimes work very well in practice, achieving results comparable to more sophisticated methods in a fraction of the time [11]. The Bayesian Committee Machine (BCM) [2] attempts to improve on independent local GPs by averaging the predictions of multiple GP experts. The model is formally equivalent to an inducing-point model in which the test points are the inducing points, i.e., it assumes that the training blocks are conditionally independent given the test data.
The BCM can yield high-quality predictions that avoid the pitfalls of local GPs (Figure 1c), while maintaining scalability to very large datasets [12]. However, as a purely predictive approximation, it is unhelpful in the GP-LVM setting, where we are interested in the likelihood of our training set irrespective of any particular test data. The desire for a BCM-style approximation to the marginal likelihood was part of the motivation for this current work; in Section 3.2 we show that the GPRF proposed in this paper can be viewed as such a model. Mixture-of-experts models [13, 14] extend the local GP concept in a different direction: instead of deterministically assigning points to GP models based on their spatial locations, they treat the assignments as unobserved random variables and do inference over them. This allows the model to adapt to different functional characteristics in different regions of the space, at the price of a more difficult inference task. We are not aware of mixture-of-experts models being applied in the GP-LVM setting, though this should in principle be possible. Simple building blocks are often combined to create more complex approximations. The PIC approximation [15] blends a global inducing-point model with local block-diagonal covariances, thus capturing a mix of global and local structure, though with the same boundary discontinuities as in “vanilla” local GPs. A related approach is the use of covariance functions with compact support [16] to capture local variation in concert with global inducing points. [11] surveys and compares several approximate GP regression methods on synthetic and real-world datasets. Finally, we note here the similar title of [17], which is in fact orthogonal to the present work: they use a random field as a prior on input locations, whereas this paper defines a random field decomposition of the GP model itself, which may be combined with any prior on X. 
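The BCM combination rule described above (local posteriors combined with the prior discounted M − 1 times, as also derived later in (9)) can be sketched as follows. This is our illustrative implementation, not code from the paper; the SE kernel, noise level, and block partition are toy choices.

```python
import numpy as np

def k_se(X1, X2, ell=1.0):
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ell ** 2)

def bcm_predict(blocks, Xs, sigma_n=0.1):
    """BCM: p(f*|y) propto p(f*)^{1-M} prod_i p(f*|y_i) -- local posterior
    precisions add, and (M-1) copies of the prior precision are subtracted."""
    Kss = k_se(Xs, Xs) + 1e-8 * np.eye(len(Xs))
    prior_prec = np.linalg.inv(Kss)
    M = len(blocks)
    prec = (1 - M) * prior_prec
    h = np.zeros(len(Xs))
    for Xb, yb in blocks:
        Kbb = k_se(Xb, Xb) + sigma_n ** 2 * np.eye(len(yb))
        Kbs = k_se(Xb, Xs)
        mu_i = Kbs.T @ np.linalg.solve(Kbb, yb)             # local posterior mean
        Sig_i = Kss - Kbs.T @ np.linalg.solve(Kbb, Kbs)     # local posterior cov
        P_i = np.linalg.inv(Sig_i)
        prec += P_i
        h += P_i @ mu_i
    Sig = np.linalg.inv(prec)
    return Sig @ h, Sig

rng = np.random.default_rng(3)
X = np.sort(rng.uniform(0, 4, 40))[:, None]
y = np.sin(X[:, 0]) + 0.05 * rng.standard_normal(40)
blocks = list(zip(np.array_split(X, 4), np.array_split(y, 4)))
Xs = np.linspace(0.5, 3.5, 5)[:, None]
mu, Sig = bcm_predict(blocks, Xs)
```

Each local posterior costs only O(m³), and since conditioning can only sharpen a Gaussian, the combined precision stays positive definite.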
2.3 Markov Random Fields

We recall some basic theory regarding Markov random fields (MRFs), also known as undirected graphical models [18]. A pairwise MRF consists of an undirected graph (V, E), along with node potentials ψ_i and edge potentials ψ_ij, which define an energy function on a random vector y,

E(y) = Σ_{i∈V} ψ_i(y_i) + Σ_{(i,j)∈E} ψ_ij(y_i, y_j),   (2)

where y is partitioned into components y_i identified with nodes in the graph. This energy in turn defines a probability density, the "Gibbs distribution", given by p(y) = (1/Z) exp(−E(y)), where Z = ∫ exp(−E(z)) dz is a normalizing constant. Gaussian random fields are the special case of pairwise MRFs in which the Gibbs distribution is multivariate Gaussian. Given a partition of y into sub-vectors y_1, y_2, …, y_M, a zero-mean Gaussian distribution with covariance K and precision matrix J = K^{−1} can be expressed by the potentials

ψ_i(y_i) = −(1/2) y_i^T J_ii y_i,   ψ_ij(y_i, y_j) = −y_i^T J_ij y_j,   (3)

where J_ij is the submatrix of J corresponding to the sub-vectors y_i, y_j. The normalizing constant Z = (2π)^{n/2} |K|^{1/2} involves the determinant of the covariance matrix. Since edges whose potentials are zero can be dropped without effect, the nonzero entries of the precision matrix can be seen as specifying the edges present in the graph.

3 Gaussian Process Random Fields

We consider a vector of n real-valued¹ observations y ∼ N(0, K_y) modeled by a GP, where K_y is implicitly a function of the input locations X and hyperparameters θ. Unless otherwise specified, all probabilities p(y_i), p(y_i, y_j), etc., refer to marginals of this full GP. We would like to perform gradient-based optimization on the marginal likelihood (1) with respect to X and/or θ, but suppose that the cost of doing so directly is prohibitive. In order to proceed, we assume a partition y = (y_1, y_2, …, y_M) of the observations into M blocks of size at most m, with an implied corresponding partition X = (X_1, X_2, …, X_M) of the (perhaps unobserved) inputs.
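Backing up briefly to Section 2.3: the Gaussian potentials (3) can be checked numerically. The sketch below (our toy example) verifies that summing the node and edge potentials over a block partition recovers the full quadratic form −(1/2) y^T J y.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 6
G = rng.standard_normal((n, n))
K = G @ G.T + n * np.eye(n)          # a covariance matrix
J = np.linalg.inv(K)                 # its precision

# partition y into M = 3 blocks of size 2 and evaluate the pairwise-MRF
# energy (2) with the Gaussian potentials (3):
#   psi_i(y_i) = -1/2 y_i^T J_ii y_i,  psi_ij(y_i, y_j) = -y_i^T J_ij y_j
y = rng.standard_normal(n)
blocks = [slice(0, 2), slice(2, 4), slice(4, 6)]
log_unnorm = 0.0
for a, bi in enumerate(blocks):
    log_unnorm += -0.5 * y[bi] @ J[bi, bi] @ y[bi]
    for bj in blocks[a + 1:]:
        log_unnorm += -y[bi] @ J[bi, bj] @ y[bj]

# the potentials recover the full Gaussian quadratic form -1/2 y^T J y
assert np.isclose(log_unnorm, -0.5 * y @ J @ y)
```

The check relies on the symmetry of J: each off-diagonal pair (i, j) and (j, i) is covered by a single edge potential.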
The source of this partition is not a focus of the current work; we might imagine that the blocks correspond to spatially local clusters of input points, assuming that we have noisy observations of the X values or at least a reasonable guess at an initialization. We let K_ij = cov_θ(y_i, y_j) denote the appropriate submatrix of K_y, and J_ij the corresponding submatrix of the precision matrix J_y = K_y^{−1}; note that J_ij ≠ (K_ij)^{−1} in general.

3.1 The GPRF Objective

Given the precision matrix J_y, we could use (3) to represent the full GP distribution in factored form as an MRF. This is not directly useful, since computing J_y requires cubic time. Instead we propose approximating the marginal likelihood via a random field in which local GPs are connected by pairwise potentials. Given an edge set, which we initially take to be the complete graph E = {(i, j) | 1 ≤ i < j ≤ M}, our approximate objective is

q_GPRF(y; X, θ) = Π_{i=1}^{M} p(y_i) · Π_{(i,j)∈E} [ p(y_i, y_j) / (p(y_i) p(y_j)) ]   (4)
               = Π_{i=1}^{M} p(y_i)^{1−|E_i|} · Π_{(i,j)∈E} p(y_i, y_j),

where E_i denotes the neighbors of i in the graph, and p(y_i) and p(y_i, y_j) are marginal probabilities under the full GP; equivalently, they are the likelihoods of local GPs defined on the points X_i and X_i ∪ X_j respectively. Note that these local likelihoods depend implicitly on X and θ. Taking the log, we obtain the energy function of an unnormalized MRF,

log q_GPRF(y; X, θ) = Σ_{i=1}^{M} (1 − |E_i|) log p(y_i) + Σ_{(i,j)∈E} log p(y_i, y_j),   (5)

with potentials

ψ_i^GPRF(y_i) = (1 − |E_i|) log p(y_i),   ψ_ij^GPRF(y_i, y_j) = log p(y_i, y_j).   (6)

We refer to the approximate objective (5) as q_GPRF rather than p_GPRF to emphasize that it is not in general a normalized probability density. It can be interpreted as a "Bethe-type" approximation [19], in which a joint density is approximated via overlapping pairwise marginals. In the special case that the full precision matrix J_y induces a tree structure on the blocks of our partition, q_GPRF recovers the exact marginal likelihood.
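The tree-structured special case just noted is easy to verify numerically. In the sketch below (our choice of example) we use the 1-D Ornstein-Uhlenbeck kernel exp(−|x − x′|), which is Markov, so contiguous blocks form a chain, and (5) with chain edges matches the exact Gaussian log-likelihood.

```python
import numpy as np

def gauss_logpdf(y, K):
    """log N(y; 0, K)."""
    L = np.linalg.cholesky(K)
    return (-0.5 * y @ np.linalg.solve(K, y)
            - np.log(np.diag(L)).sum() - 0.5 * len(y) * np.log(2 * np.pi))

rng = np.random.default_rng(4)
X = np.sort(rng.uniform(0, 10, 30))
K = np.exp(-np.abs(X[:, None] - X[None, :])) + 1e-10 * np.eye(30)   # OU kernel
y = np.linalg.cholesky(K) @ rng.standard_normal(30)

blocks = np.array_split(np.arange(30), 3)   # contiguous blocks -> chain 0-1-2
edges = [(0, 1), (1, 2)]
deg = [1, 2, 1]                             # |E_i| for each block

# eq. (5): sum_i (1-|E_i|) log p(y_i) + sum_{(i,j) in E} log p(y_i, y_j)
logq = sum((1 - deg[i]) * gauss_logpdf(y[b], K[np.ix_(b, b)])
           for i, b in enumerate(blocks))
for i, j in edges:
    ij = np.concatenate([blocks[i], blocks[j]])
    logq += gauss_logpdf(y[ij], K[np.ix_(ij, ij)])

assert np.isclose(logq, gauss_logpdf(y, K))   # exact on a tree
```

On a non-tree graph the equality becomes an approximation, but each term still only involves one or two blocks, which is what makes the objective cheap and parallel.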
(This is shown in the supplementary material.) In general this will not be the case, but in the spirit of loopy belief propagation [20], we consider the tree-structured case as an approximation for the general setting. Before further analyzing the nature of the approximation, we first observe that, as a sum of local Gaussian log-densities, the objective (5) is straightforward to implement and fast to evaluate. Each of the O(M²) pairwise densities requires O((2m)³) = O(m³) time, for an overall complexity of O(M²m³) = O(n²m) when M = n/m. The quadratic dependence on n cannot be avoided by any algorithm that computes similarities between all pairs of training points; however, in practice we will consider "local" modifications in which E is something smaller than all pairs of blocks. For example, if each block is connected only to a fixed number of spatial neighbors, the complexity reduces to O(nm²), i.e., linear in n. In the special case where E is the empty set, we recover the exact likelihood of independent local GPs. It is also straightforward to obtain the gradient of (5) with respect to the hyperparameters θ and inputs X, by summing the gradients of the local densities. The likelihood and gradient for each term in the sum can be evaluated independently using only local subsets of the training data, enabling a simple parallel implementation. Having seen that q_GPRF can be optimized efficiently, it remains for us to argue its validity as a proxy for the full GP marginal likelihood. Due to space constraints we defer proofs to the supplementary material, though our results are not difficult. We first show that, like the full marginal likelihood (1), q_GPRF has the form of a Gaussian distribution, but with a different precision matrix.

¹The extension to multiple independent outputs is straightforward.

Theorem 1.
The objective q_GPRF has the form of an unnormalized Gaussian density with precision matrix J̃, whose blocks J̃_ij are given by

J̃_ii = K_ii^{−1} + Σ_{j∈E_i} ( Q_11^(ij) − K_ii^{−1} ),   J̃_ij = { Q_12^(ij) if (i, j) ∈ E; 0 otherwise },   (7)

where Q^(ij) is the local precision matrix, defined as the inverse of the marginal covariance,

Q^(ij) = [ Q_11^(ij)  Q_12^(ij) ; Q_21^(ij)  Q_22^(ij) ] = [ K_ii  K_ij ; K_ji  K_jj ]^{−1}.

Although the Gaussian density represented by q_GPRF is not in general normalized, we show that it is approximately normalized in a certain sense.

Theorem 2. The objective q_GPRF is approximately normalized in the sense that the optimal value of the Bethe free energy [19],

F_B(b) = Σ_{i∈V} ∫_{y_i} b_i(y_i) [ (1 − |E_i|) ln b_i(y_i) − ln ψ_i(y_i) ] + Σ_{(i,j)∈E} ∫_{y_i,y_j} b_ij(y_i, y_j) ln( b_ij(y_i, y_j) / ψ_ij(y_i, y_j) ) ≈ log Z,   (8)

the approximation to the normalizing constant found by loopy belief propagation, is precisely zero. Furthermore, this optimum is obtained when the pseudomarginals b_i, b_ij are taken to be the true GP marginals p_i, p_ij. This implies that loopy belief propagation run on a GPRF would recover the marginals of the true GP.

3.2 Predictive equivalence to the BCM

We have introduced q_GPRF as a surrogate model for the training set (X, y); however, it is natural to extend the GPRF to make predictions at a set of test points X*, by including the function values f* = f(X*) as an (M+1)-st block, with an edge to each of the training blocks. The resulting predictive distribution,

p_GPRF(f* | y) ∝ q_GPRF(f*, y) = p(f*) Π_{i=1}^{M} [ p(y_i, f*) / (p(y_i) p(f*)) ] · Π_{i=1}^{M} p(y_i) Π_{(i,j)∈E} [ p(y_i, y_j) / (p(y_i) p(y_j)) ] ∝ p(f*)^{1−M} Π_{i=1}^{M} p(f* | y_i),   (9)

corresponds exactly to the prediction of the Bayesian Committee Machine (BCM) [2]. This motivates the GPRF as a natural extension of the BCM as a model for the training set, providing an alternative to

(a) Noisy observed locations: mean error 2.48. (b) Full GP: 0.21. (c) GPRF-100: 0.36 (showing grid cells). (d) FITC-500: 4.86
(with inducing points; note the contraction).

Figure 2: Inferred locations on synthetic data (n = 10000), colored by the first output dimension y_1.

the standard transductive interpretation of the BCM.² A similar derivation shows that the conditional distribution of any block y_i given all other blocks y_{j≠i} also takes the form of a BCM prediction, suggesting the possibility of pseudolikelihood training [21], i.e., directly optimizing the quality of BCM predictions on held-out blocks (not explored in this paper).

4 Experiments

4.1 Uniform Input Distribution

We first consider a 2D synthetic dataset intended to simulate spatial location tasks such as WiFi-SLAM [22] or seismic event location (below), in which we observe high-dimensional measurements but have only noisy information regarding the locations at which those measurements were taken. We sample n points uniformly from the square of side length √n to generate the true inputs X, then sample a 50-dimensional output Y from independent GPs with SE kernel k(r) = exp(−(r/ℓ)²) for ℓ = 6.0 and noise standard deviation σ_n = 0.1. The observed points X_obs ∼ N(X, σ_obs² I) arise by corrupting X with isotropic Gaussian noise of standard deviation σ_obs = 2. The parameters ℓ, σ_n, and σ_obs were chosen to generate problems with interesting short-lengthscale structure for which GP-LVM optimization could nontrivially improve the initial noisy locations. Figure 2a shows a typical sample from this model. For local GPs and GPRFs, we take the spatial partition to be a grid with n/m cells, where m is the desired number of points per cell. The GPRF edge set E connects each cell to its eight neighbors (Figure 2c), yielding linear time complexity O(nm²). During optimization, a practical choice is necessary: do we use a fixed partition of the points, or re-assign points to cells as they cross spatial boundaries?
The latter corresponds to a coherent (block-diagonal) spatial covariance function, but introduces discontinuities into the marginal likelihood. In our experiments the GPRF was not sensitive to this choice, but local GPs performed more reliably with fixed spatial boundaries (in spite of the discontinuities), so we used this approach for all experiments. For comparison, we also evaluate the Sparse GP-LVM, implemented in GPy [23], which uses the FITC approximation to the marginal likelihood [7]. (We also considered the Bayesian GP-LVM [4], but found it to be more resource-intensive with no meaningful difference in results on this problem.) Here the approximation parameter m is the number of inducing points. We ran L-BFGS optimization to recover maximum a posteriori (MAP) locations, or local optima thereof. Figure 3a shows mean location error (Euclidean distance) for n = 10000 points; at this size it is tractable to compare directly to the full GP-LVM. The GPRF with a large block size (m = 1111, corresponding to a 3×3 grid) nearly matches the solution quality of the full GP while requiring less time, while the local methods are quite fast to converge but become stuck at inferior optima. The FITC optimization exhibits an interesting pathology: it initially moves towards a good solution but then diverges towards what turns out to correspond to a contraction of the space (Figure 2d); we conjecture this is because there are not enough inducing points to faithfully represent the full GP

²The GPRF is still transductive, in the sense that adding a test block f* will change the marginal distribution on the training observations y, as can be seen explicitly in the precision matrix (7). The contribution of the GPRF is that it provides a reasonable model for the training-set likelihood even in the absence of test points.

(a) Mean location error over time for n = 10000, including comparison to full GP. (b) Mean error at convergence as a function of n, with learned lengthscale.
(c) Mean location error over time for n = 80000. Figure 3: Results on synthetic data.

distribution over the entire space. A partial fix is to allow FITC to jointly optimize over locations and the correlation lengthscale ℓ; this yielded a biased lengthscale estimate ℓ̂ ≈ 7.6 but more accurate locations (FITC-500-ℓ in Figure 3a). To evaluate scaling behavior, we next considered problems of increasing size up to n = 80000.³ Out of generosity to FITC we allowed each method to learn its own preferred lengthscale. Figure 3b reports the solution quality at convergence, showing that even with an adaptive lengthscale, FITC requires increasingly many inducing points to compete in large spatial domains. This is intractable for larger problems due to O(m³) scaling; indeed, attempts to run at n > 55000 with 2000 inducing points exceeded 32GB of available memory. Recently, more sophisticated inducing-point methods have claimed scalability to very large datasets [24, 25], but they do so with m ≤ 1000; we expect that they would hit the same fundamental scaling constraints for problems that inherently require many inducing points. On our largest synthetic problem, n = 80000, inducing-point approximations are intractable, as is the full GP-LVM. Local GPs converge more quickly than GPRFs of equal block size, but the GPRFs find higher-quality solutions (Figure 3c). After a short initial period, the best performance always belongs to a GPRF, and at the conclusion of 24 hours the best GPRF solution achieves mean error 42% lower than the best local solution (0.18 vs 0.31).

4.2 Seismic event location

We next consider an application to seismic event location, which formed the motivation for this work. Seismic waves can be viewed as high-dimensional vectors generated from an underlying three-dimensional manifold, namely the Earth's crust. Nearby events tend to generate similar waveforms; we can model this spatial correlation as a Gaussian process.
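The grid partition and eight-neighbor edge set used in the synthetic experiments above can be sketched as follows. This is an illustrative stand-in with hypothetical function names; the cell-assignment rule (clamping to the outer cells) is our simplifying assumption:

```python
import numpy as np

def grid_partition(X, cells_per_side):
    """Assign 2-D points to cells of a regular grid over their bounding box."""
    lo, hi = X.min(axis=0), X.max(axis=0)
    # Map each coordinate to a cell index in [0, cells_per_side).
    ij = np.minimum(((X - lo) / (hi - lo + 1e-12) * cells_per_side).astype(int),
                    cells_per_side - 1)
    return ij[:, 0] * cells_per_side + ij[:, 1]

def eight_neighbor_edges(cells_per_side):
    """Edge set connecting each grid cell to its (up to) eight neighbors."""
    edges = set()
    for r in range(cells_per_side):
        for c in range(cells_per_side):
            for dr in (-1, 0, 1):
                for dc in (-1, 0, 1):
                    rr, cc = r + dr, c + dc
                    if (dr, dc) != (0, 0) and 0 <= rr < cells_per_side \
                            and 0 <= cc < cells_per_side:
                        a = r * cells_per_side + c
                        b = rr * cells_per_side + cc
                        edges.add((min(a, b), max(a, b)))
    return sorted(edges)
```

Since each cell has a bounded number of neighbors, the number of pairwise terms grows linearly in the number of cells, giving the O(nm²) cost quoted in the text.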
Prior information regarding the event locations is available from traditional travel-time-based location systems [26], which produce an independent Gaussian uncertainty ellipse for each event. A full probability model of seismic waveforms, accounting for background noise and performing joint alignment of arrival times, is beyond the scope of this paper. To focus specifically on the ability to approximate GP-LVM inference, we used real event locations but generated synthetic waveforms by sampling from a 50-output GP using a Matérn kernel [3] with ν = 3/2 and a lengthscale of 40km. We also generated observed location estimates X_obs by corrupting the true locations with

³The astute reader will wonder how we generated synthetic data on problems that are clearly too large for an exact GP. For these synthetic problems as well as the seismic example below, the covariance matrix is relatively sparse, with only ~2% of entries corresponding to points within six kernel lengthscales of each other. By considering only these entries, we were able to draw samples using a sparse Cholesky factorization, although this required approximately 30GB of RAM. Unfortunately, this approach does not straightforwardly extend to GP-LVM inference under the exact GP, since the standard expression for the marginal likelihood derivatives,

\frac{\partial}{\partial x_i} \log p(y) = \frac{1}{2} \operatorname{tr}\!\Big( \big( (K_y^{-1} y)(K_y^{-1} y)^{\top} - K_y^{-1} \big) \frac{\partial K_y}{\partial x_i} \Big),

involves the full precision matrix K_y^{-1}, which is not sparse in general. Bypassing this expression via automatic differentiation through the sparse Cholesky decomposition could perhaps allow exact GP-LVM inference to scale to somewhat larger problems.

(a) Event map for seismic dataset. (b) Mean location error over time. Figure 4: Seismic event location task.

Gaussian noise of standard deviation 20km in each dimension. Given the observed waveforms and noisy locations, we are interested in recovering the latitude, longitude, and depth of each event.
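The sampling trick of footnote 3 — zeroing covariance entries beyond six kernel lengthscales before factorizing — can be illustrated with a small dense stand-in. A real implementation at the paper's scale would use a sparse Cholesky factorization; the function names here are ours:

```python
import numpy as np

def truncated_se_cov(X, lengthscale, cutoff_mult=6.0, jitter=1e-8):
    """SE covariance with entries zeroed beyond cutoff_mult lengthscales.

    Entries beyond the cutoff are at most exp(-cutoff_mult**2), so the
    truncated matrix stays (numerically) positive definite with a small
    jitter. This dense version is only an illustration of the idea.
    """
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    K = np.exp(-(d / lengthscale) ** 2)
    K[d > cutoff_mult * lengthscale] = 0.0
    return K + jitter * np.eye(len(X))

def sample_gp(X, lengthscale, rng):
    """Draw one GP sample via Cholesky of the truncated covariance."""
    K = truncated_se_cov(X, lengthscale)
    L = np.linalg.cholesky(K)
    return L @ rng.standard_normal(len(X))
```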
Our dataset consists of 107,556 events detected at the Mankachi array station in Kazakhstan between 2004 and 2012. Figure 4a shows the event locations, colored to reflect a principal axis tree partition [27] into blocks of 400 points (tree construction time was negligible). The GPRF edge set contains all pairs of blocks for which any two points had initial locations within one kernel lengthscale (40km) of each other. We also evaluated longer-distance connections, but found that this relatively local edge set had the best performance/time tradeoff: eliminating edges not only speeds up each optimization step, but in some cases actually yielded faster per-step convergence (perhaps because denser edge sets tended to create large cliques for which the pairwise GPRF objective is a poor approximation). Figure 4b shows the quality of recovered locations as a function of computation time; we jointly optimized over event locations as well as two lengthscale parameters (surface distance and depth) and the noise variance σ²_n. Local GPs perform quite well on this task, but the best GPRF achieves 7% lower mean error than the best local GP model (12.8km vs 13.7km, respectively) given equal time. An even better result can be obtained by using the results of a local GP optimization to initialize a GPRF. Using the same partition (m = 800) for both local GPs and the GPRF, this "hybrid" method gives the lowest final error (12.2km), and is dominant across a wide range of wall-clock times, suggesting it as a promising practical approach for large GP-LVM optimizations.

5 Conclusions and Future Work

The Gaussian process random field is a tractable and effective surrogate for the GP marginal likelihood. It has the flavor of approximate inference methods such as loopy belief propagation, but can be analyzed precisely in terms of a deterministic approximation to the inverse covariance, and provides a new training-time interpretation of the Bayesian Committee Machine.
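The lengthscale-based edge set described above can be sketched as follows (our illustration; a practical implementation for ~10⁵ events would use a spatial index rather than all-pairs distances):

```python
import numpy as np

def blockwise_edges(X, assignment, lengthscale):
    """Connect blocks whose member points come within one lengthscale.

    X          : (n, d) initial point locations
    assignment : (n,) block index per point
    lengthscale: distance threshold between closest members
    """
    edges = set()
    ids = np.unique(assignment)
    for a in range(len(ids)):
        for b in range(a + 1, len(ids)):
            Xa = X[assignment == ids[a]]
            Xb = X[assignment == ids[b]]
            # Minimum pairwise distance between the two blocks.
            d = np.linalg.norm(Xa[:, None, :] - Xb[None, :, :], axis=-1)
            if d.min() <= lengthscale:
                edges.add((int(ids[a]), int(ids[b])))
    return sorted(edges)
```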
It is easy to implement and can be straightforwardly parallelized. One direction for future work involves finding partitions for which a GPRF performs well, e.g., partitions that induce a block near-tree structure. A perhaps related question is identifying when the GPRF objective defines a normalizable probability distribution (beyond the case of an exact tree structure) and under what circumstances it is a good approximation to the exact GP likelihood. The evaluation in this paper focuses on spatial data; however, both local GPs and the BCM have been successfully applied to high-dimensional regression problems [11, 12], so exploring the effectiveness of the GPRF for dimensionality-reduction tasks would also be interesting. Another useful avenue is to integrate the GPRF framework with other approximations: since the GPRF and inducing-point methods have complementary strengths (the GPRF is useful for modeling a function over a large space, while inducing points are useful when the density of available data in some region of the space exceeds what is necessary to represent the function), an integrated method might enable new applications for which neither approach alone would be sufficient.

Acknowledgements

We thank the anonymous reviewers for their helpful suggestions. This work was supported by DTRA grant #HDTRA-11110026, and by computing resources donated by Microsoft Research under an Azure for Research grant.

References

[1] Neil D. Lawrence. Gaussian process latent variable models for visualisation of high dimensional data. In Advances in Neural Information Processing Systems (NIPS), 2004.
[2] Volker Tresp. A Bayesian committee machine. Neural Computation, 12(11):2719–2741, 2000.
[3] Carl Rasmussen and Chris Williams. Gaussian Processes for Machine Learning. MIT Press, 2006.
[4] Michalis K. Titsias and Neil D. Lawrence. Bayesian Gaussian process latent variable model. In International Conference on Artificial Intelligence and Statistics (AISTATS), 2010.
[5] Andreas C. Damianou, Michalis K. Titsias, and Neil D. Lawrence. Variational inference for latent variables and uncertain inputs in Gaussian processes. Journal of Machine Learning Research (JMLR), 2015.
[6] Joaquin Quiñonero-Candela and Carl Edward Rasmussen. A unifying view of sparse approximate Gaussian process regression. Journal of Machine Learning Research (JMLR), 6:1939–1959, 2005.
[7] Neil D. Lawrence. Learning for larger datasets with the Gaussian process latent variable model. In International Conference on Artificial Intelligence and Statistics (AISTATS), 2007.
[8] Michalis K. Titsias. Variational learning of inducing variables in sparse Gaussian processes. In International Conference on Artificial Intelligence and Statistics (AISTATS), 2009.
[9] Duy Nguyen-Tuong, Matthias Seeger, and Jan Peters. Model learning with local Gaussian process regression. Advanced Robotics, 23(15):2015–2034, 2009.
[10] Chiwoo Park, Jianhua Z. Huang, and Yu Ding. Domain decomposition approach for fast Gaussian process regression of large spatial data sets. Journal of Machine Learning Research (JMLR), 12:1697–1728, 2011.
[11] Krzysztof Chalupka, Christopher K. I. Williams, and Iain Murray. A framework for evaluating approximation methods for Gaussian process regression. Journal of Machine Learning Research (JMLR), 14:333–350, 2013.
[12] Marc Peter Deisenroth and Jun Wei Ng. Distributed Gaussian processes. In International Conference on Machine Learning (ICML), 2015.
[13] Carl Edward Rasmussen and Zoubin Ghahramani. Infinite mixtures of Gaussian process experts. In Advances in Neural Information Processing Systems (NIPS), pages 881–888, 2002.
[14] Trung Nguyen and Edwin Bonilla. Fast allocation of Gaussian process experts. In International Conference on Machine Learning (ICML), pages 145–153, 2014.
[15] Edward Snelson and Zoubin Ghahramani. Local and global sparse Gaussian process approximations. In International Conference on Artificial Intelligence and Statistics (AISTATS), 2007.
[16] Jarno Vanhatalo and Aki Vehtari. Modelling local and global phenomena with sparse Gaussian processes. In Uncertainty in Artificial Intelligence (UAI), 2008.
[17] Guoqiang Zhong, Wu-Jun Li, Dit-Yan Yeung, Xinwen Hou, and Cheng-Lin Liu. Gaussian process latent random field. In AAAI Conference on Artificial Intelligence, 2010.
[18] Daphne Koller and Nir Friedman. Probabilistic Graphical Models: Principles and Techniques. MIT Press, 2009.
[19] Jonathan S. Yedidia, William T. Freeman, and Yair Weiss. Bethe free energy, Kikuchi approximations, and belief propagation algorithms. In Advances in Neural Information Processing Systems (NIPS), 13, 2001.
[20] Kevin P. Murphy, Yair Weiss, and Michael I. Jordan. Loopy belief propagation for approximate inference: An empirical study. In Uncertainty in Artificial Intelligence (UAI), pages 467–475, 1999.
[21] Julian Besag. Statistical analysis of non-lattice data. The Statistician, pages 179–195, 1975.
[22] Brian Ferris, Dieter Fox, and Neil D. Lawrence. WiFi-SLAM using Gaussian process latent variable models. In International Joint Conference on Artificial Intelligence (IJCAI), pages 2480–2485, 2007.
[23] The GPy authors. GPy: A Gaussian process framework in Python. http://github.com/SheffieldML/GPy, 2012–2015.
[24] James Hensman, Nicolo Fusi, and Neil D. Lawrence. Gaussian processes for big data. In Uncertainty in Artificial Intelligence (UAI), page 282, 2013.
[25] Yarin Gal, Mark van der Wilk, and Carl Rasmussen. Distributed variational inference in sparse Gaussian process regression and latent variable models. In Advances in Neural Information Processing Systems (NIPS), 2014.
[26] International Seismological Centre. On-line Bulletin. Int. Seis. Cent., Thatcham, United Kingdom, 2015. http://www.isc.ac.uk.
[27] James McNames. A fast nearest-neighbor algorithm based on a principal axis search tree. IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI), 23(9):964–976, 2001.
Decoupled Deep Neural Network for Semi-supervised Semantic Segmentation

Seunghoon Hong* Hyeonwoo Noh* Bohyung Han
Dept. of Computer Science and Engineering, POSTECH, Pohang, Korea
{maga33,hyeonwoonoh,bhhan}@postech.ac.kr

Abstract

We propose a novel deep neural network architecture for semi-supervised semantic segmentation using heterogeneous annotations. Contrary to existing approaches posing semantic segmentation as a single task of region-based classification, our algorithm decouples classification and segmentation, and learns a separate network for each task. In this architecture, labels associated with an image are identified by the classification network, and binary segmentation is subsequently performed for each identified label by the segmentation network. The decoupled architecture enables us to learn the classification and segmentation networks separately, based on training data with image-level and pixel-wise class labels, respectively. It also facilitates reducing the search space for segmentation effectively by exploiting class-specific activation maps obtained from bridging layers. Our algorithm shows outstanding performance compared to other semi-supervised approaches with far fewer strongly annotated training images on the PASCAL VOC dataset. Qualitatively, we show that the positive dependencies discovered by our network are interesting and intuitive. Finally, we show that our algorithms are fast and scale well (code available online).

1 Introduction

Semantic segmentation is a technique to assign structured semantic labels—typically, object class labels—to individual pixels in images. This problem has been studied extensively over decades, yet remains challenging since object appearances involve significant variations potentially originating from pose variations, scale changes, occlusion, background clutter, etc. However, in spite of such challenges, techniques based on Deep Neural Networks (DNNs) demonstrate impressive performance on standard benchmark datasets such as PASCAL VOC [1]. Most DNN-based approaches pose semantic segmentation as a pixel-wise classification problem [2, 3, 4, 5, 6].
Although these approaches have achieved good performance compared to previous methods, training a DNN requires a large number of segmentation ground-truths, which demand tremendous annotation effort and cost. For this reason, reliable pixel-wise segmentation annotations are typically available only for a small number of classes and images, which makes it difficult to apply supervised DNNs to semantic segmentation tasks involving various kinds of objects. Semi- or weakly-supervised learning approaches [7, 8, 9, 10] alleviate the lack of training data by exploiting weak label annotations per bounding box [10, 8] or image [7, 8, 9]. They often assume that a large set of weak annotations is available during training while training examples with strong annotations are missing or limited. This is a reasonable assumption because weak annotations such as class labels for bounding boxes and images require only a fraction of the effort compared to strong annotations, i.e., pixel-wise segmentations. The standard approach in this setting is to update the model of a supervised DNN by iteratively inferring and refining hypothetical segmentation labels using weakly annotated images. Such iterative techniques often work well in practice [8, 10], but the training methods rely on ad-hoc procedures and there is no guarantee of convergence; implementation may be tricky and the algorithm may not be straightforward to reproduce.

*Both authors contributed equally to this paper.

We propose a novel decoupled DNN architecture appropriate for semi-supervised semantic segmentation, which exploits heterogeneous annotations with a small number of strong annotations—full segmentation masks—as well as a large number of weak annotations—object class labels per image. Our algorithm stands out from traditional DNN-based techniques because the architecture is composed of two separate networks; one is for classification and the other is for segmentation.
In the proposed network, object labels associated with an input image are identified by the classification network, while figure-ground segmentation of each identified label is subsequently obtained by the segmentation network. Additionally, there are bridging layers, which deliver class-specific information from the classification network to the segmentation network and enable the segmentation network to focus on one label identified by the classification network at a time. Training is performed on each network separately, where the networks for classification and segmentation are trained with image-level and pixel-wise annotations, respectively; training does not require an iterative procedure, and the algorithm is easy to reproduce. More importantly, decoupling classification and segmentation reduces the search space for segmentation significantly, which makes it feasible to train the segmentation network with a handful of segmentation annotations. Inference in our network is also simple and does not involve any post-processing. Extensive experiments show that our network substantially outperforms existing semi-supervised techniques based on DNNs even with much smaller numbers of segmentation annotations, e.g., 5 or 10 strong annotations per class. The rest of the paper is organized as follows. We briefly review related work and introduce the overall algorithm in Sections 2 and 3, respectively. The detailed configuration of the proposed network is described in Section 4, and the training algorithm is presented in Section 5. Section 6 presents experimental results on a challenging benchmark dataset.

2 Related Work

Recent breakthroughs in semantic segmentation are mainly driven by supervised approaches relying on Convolutional Neural Networks (CNNs) [2, 3, 4, 5, 6]. Based on CNNs developed for image classification, they train networks to assign semantic labels to local regions within images such as pixels [2, 3, 4] or superpixels [5, 6]. Notably, Long et al.
[2] propose an end-to-end system for semantic segmentation by transforming a standard CNN for classification into a fully convolutional network. Later approaches improve segmentation accuracy through post-processing based on a fully-connected CRF [3, 11]. Another branch of semantic segmentation is to learn a multi-layer deconvolution network, which also provides a complete end-to-end pipeline [12]. However, training these networks requires a large number of segmentation ground-truths, and collecting such a dataset is difficult due to the excessive annotation effort involved. To mitigate this heavy requirement on training data, weakly-supervised learning approaches have recently drawn attention. In the weakly-supervised setting, models for semantic segmentation have been trained with only image-level labels [7, 8, 9] or bounding box class labels [10]. Given weakly annotated training images, they infer latent segmentation masks within Multiple Instance Learning (MIL) [7, 9] or Expectation-Maximization (EM) [8] frameworks built on the CNNs for supervised semantic segmentation. However, the performance of weakly-supervised learning approaches except [10] is substantially lower than that of supervised methods, mainly because there is no direct supervision for segmentation during training. Note that [10] requires bounding box annotations as weak supervision, which are already quite strong and significantly more expensive to acquire than image-level labels. Semi-supervised learning is an alternative to bridge the gap between fully- and weakly-supervised learning approaches. In the standard semi-supervised learning framework, given only a small number of training images with strong annotations, one needs to infer the full segmentation labels for the rest of the data. However, it is not plausible to learn the huge number of parameters in deep networks reliably in this scenario.
Instead, [8, 10] train the models based on heterogeneous annotations—a large number of weak annotations as well as a small number of strong annotations. This approach is motivated by the facts that weak annotations, i.e., object labels per bounding box or image, are much more easily accessible than strong ones, and that the availability of weak annotations is useful for learning a deep network by mining additional training examples with full segmentation masks. Based on supervised CNN architectures, they iteratively infer and refine pixel-wise segmentation labels of weakly annotated images with the guidance of strongly annotated images, where image-level labels [8] and bounding box annotations [10] are employed as weak annotations.

Figure 1: The architecture of the proposed network. While classification and segmentation networks are decoupled, bridging layers deliver critical information from classification network to segmentation network.

They claim that exploiting a few strong annotations substantially improves the accuracy of semantic segmentation while reducing annotation effort for supervision significantly. However, they rely on iterative training procedures, which are often ad-hoc and heuristic and increase the complexity of reproducing results in general. Also, these approaches still need a fairly large number of strong annotations to achieve reliable performance.

3 Algorithm Overview

Figure 1 presents the overall architecture of the proposed network. Our network is composed of three parts: classification network, segmentation network, and bridging layers connecting the two networks. In this model, semantic segmentation is performed by separate but successive operations of classification and segmentation. Given an input image, the classification network identifies labels associated with the image, and the segmentation network produces pixel-wise figure-ground segmentation corresponding to each identified label.
This formulation may suffer from a loose connection between classification and segmentation, but we address this challenge by adding bridging layers between the two networks and delivering class-specific information from the classification network to the segmentation network. Then, it is possible to optimize the two networks using separate objective functions while the two decoupled tasks collaborate effectively to accomplish the final goal. Training our network is very straightforward. We assume that a large number of image-level annotations are available while there are only a few training images with segmentation annotations. Given these heterogeneous and unbalanced training data, we first learn the classification network using the rich image-level annotations. Then, with the classification network fixed, we jointly optimize the bridging layers and the segmentation network using the small number of training examples with strong annotations. There are only a small number of strongly annotated training data, but we alleviate this challenging situation by generating many artificial training examples through data augmentation. The contributions and characteristics of the proposed algorithm are summarized below:
• We propose a novel DNN architecture for semi-supervised semantic segmentation using heterogeneous annotations. The new architecture decouples classification and segmentation tasks, which enables us to employ pre-trained models for the classification network and train only the segmentation network and bridging layers using a few strongly annotated data.
• The bridging layers construct class-specific activation maps, which are delivered from the classification network to the segmentation network. These maps provide strong priors for segmentation, and reduce the search space dramatically for training and inference.
• The overall training procedure is very simple since the two networks are trained separately.
Our algorithm does not infer segmentation labels of weakly annotated images through iterative heuristics¹, which are common in semi-supervised learning techniques [8, 10].

¹Due to this property, our framework is different from standard semi-supervised learning but close to few-shot learning with heterogeneous annotations. Nonetheless, we refer to it as semi-supervised learning in this paper since we have a fraction of strongly annotated data in our training dataset but complete annotations of weak labels. Note that our level of supervision is similar to the semi-supervised learning case in [8].

The proposed algorithm provides a concept to make up for the lack of strongly annotated training data using a large number of weakly annotated data. This concept is interesting because the assumption about the availability of training data is desirable for real situations. We estimate figure-ground segmentation maps only for the relevant classes identified by the classification network, which improves the scalability of the algorithm in terms of the number of classes. Finally, our algorithm outperforms the comparable semi-supervised learning method by substantial margins in various settings.

4 Architecture

This section describes the detailed configurations of the proposed network, including the classification network, the segmentation network, and the bridging layers between the two networks.

4.1 Classification Network

The classification network takes an image x as its input, and outputs a normalized score vector S(x; θ_c) ∈ ℝ^L representing a set of relevance scores of the input x, based on the trained classification model θ_c, for L predefined categories. The objective of the classification network is to minimize the error between ground-truth and estimated class labels, formally written as

\min_{\theta_c} \sum_i e_c\big(y_i, S(x_i; \theta_c)\big), \qquad (1)

where y_i ∈ {0, 1}^L denotes the ground-truth label vector of the i-th example and e_c(y_i, S(x_i; θ_c)) is the classification loss of S(x_i; θ_c) with respect to y_i.
We employ the VGG 16-layer net [13] as the base architecture for our classification network. It consists of 13 convolutional layers, followed by rectification and optional pooling layers, and 3 fully connected layers for domain-specific projection. The sigmoid cross-entropy loss function is employed in Eq. (1), a typical choice for multi-label classification tasks. Given the output scores S(x_i; θ_c), our classification network identifies a set of labels L_i associated with input image x_i. The region in x_i corresponding to each label l ∈ L_i is predicted by the segmentation network discussed next.

4.2 Segmentation Network

The segmentation network takes a class-specific activation map g^l_i of input image x_i, which is obtained from the bridging layers, and produces a two-channel class-specific segmentation map M(g^l_i; θ_s) after applying a softmax function, where θ_s is the model parameter of the segmentation network. Note that M(g^l_i; θ_s) has foreground and background channels, denoted by M_f(g^l_i; θ_s) and M_b(g^l_i; θ_s), respectively. The segmentation task is formulated as per-pixel regression to the ground-truth segmentation, which minimizes

\min_{\theta_s} \sum_i e_s\big(z^l_i, M(g^l_i; \theta_s)\big), \qquad (2)

where z^l_i denotes the binary ground-truth segmentation mask for category l of the i-th image x_i, and e_s(z^l_i, M(g^l_i; θ_s)) is the segmentation loss of M_f(g^l_i; θ_s)—or equivalently the segmentation loss of M_b(g^l_i; θ_s)—with respect to z^l_i. The recently proposed deconvolution network [12] is adopted for our segmentation network. Given an input activation map g^l_i corresponding to input image x_i, the segmentation network generates a segmentation mask of the same size as x_i by multiple series of unpooling, deconvolution, and rectification operations. Unpooling is implemented by importing the switch variables from every pooling layer in the classification network, and the number of deconvolutional and unpooling layers is identical to the number of convolutional and pooling layers in the classification network.
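As a concrete (unofficial) sketch, the loss term of Eq. (1) with the sigmoid cross-entropy mentioned above can be written for a single image with L candidate labels as follows; the function names are ours:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def multilabel_sigmoid_xent(logits, y):
    """Sigmoid cross-entropy summed over L labels (the e_c term of Eq. (1)).

    logits : (L,) raw scores for one image
    y      : (L,) binary ground-truth label vector
    """
    p = sigmoid(logits)
    eps = 1e-12  # numerical guard against log(0)
    return -np.sum(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))
```

Each label contributes an independent binary term, which is what allows several labels to be active for the same image.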
We employ the softmax loss function to measure the per-pixel loss in Eq. (2). Note that the objective function in Eq. (2) corresponds to pixel-wise binary classification; it infers whether each pixel belongs to the given class l or not. This is the major difference from existing networks for semantic segmentation, including [12], which aim to classify each pixel into one of the L predefined classes. By decoupling classification from segmentation and posing the objective of the segmentation network as binary classification, our algorithm reduces the number of parameters in the segmentation network significantly. Specifically, this is because we identify the relevant labels using the classification network and perform binary segmentation for each of the labels, so the number of output channels in the segmentation network is set to two—for foreground and background—regardless of the total number of candidate classes. This property is especially advantageous in our challenging scenario, where only a few pixel-wise annotations (typically 5 to 10 annotations per class) are available for training the segmentation network.

Figure 2: Examples of class-specific activation maps (output of bridging layers). We show the most representative channel for visualization. Despite significant variations in input images, the class-specific activation maps share similar properties.

4.3 Bridging Layers

To enable the segmentation network described in Section 4.2 to produce the segmentation mask of a specific class, the input to the segmentation network should involve class-specific information as well as the spatial information required for shape generation. To this end, we place additional layers underneath the segmentation network, referred to as bridging layers, to construct the class-specific activation map g^l_i for each identified label l ∈ L_i. To encode the spatial configuration of objects present in the image, we exploit outputs from an intermediate layer in the classification network.
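The per-pixel softmax loss of Eq. (2) over the two-channel output can likewise be sketched (our own illustration; taking channel 0 as background and channel 1 as foreground is our assumption):

```python
import numpy as np

def softmax2(scores):
    """Per-pixel softmax over the 2 channels of a (2, H, W) score map."""
    e = np.exp(scores - scores.max(axis=0, keepdims=True))
    return e / e.sum(axis=0, keepdims=True)

def binary_seg_loss(scores, z):
    """Mean per-pixel softmax loss of Eq. (2) for a binary mask z of shape (H, W)."""
    M = softmax2(scores)  # M[0] = background map, M[1] = foreground map
    eps = 1e-12
    ll = z * np.log(M[1] + eps) + (1 - z) * np.log(M[0] + eps)
    return -ll.mean()
```

Note the two output channels are the same for every class l, which is why the parameter count does not grow with the number of candidate classes.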
We take the outputs from the last pooling layer (pool5), since the activation patterns of convolution and pooling layers often preserve spatial information effectively, while the activations in the higher layers tend to capture more abstract and global information. We denote the activation map of the pool5 layer by f_spat hereafter. Although the activations in f_spat maintain useful information for shape generation, they contain mixed information of all relevant labels in x_i, so we must additionally identify class-specific activations in f_spat. For this purpose, we compute class-specific saliency maps using the back-propagation technique proposed in [14]. Let f^(i) be the output of the i-th layer (i = 1, . . . , M) in the classification network. The relevance of activations in f^(k) with respect to a specific class l is computed by the chain rule of partial derivatives, similar to error back-propagation in optimization, as

f^l_{cls} = \frac{\partial S^l}{\partial f^{(k)}} = \frac{\partial f^{(M)}}{\partial f^{(M-1)}} \, \frac{\partial f^{(M-1)}}{\partial f^{(M-2)}} \cdots \frac{\partial f^{(k+1)}}{\partial f^{(k)}}, \qquad (3)

where f^l_{cls} denotes the class-specific saliency map and S^l is the classification score of class l. Intuitively, Eq. (3) means that the values in f^l_{cls} depend on how much the activations in f^(k) are relevant to class l; this is measured by computing the partial derivative of the class score S^l with respect to the activations in f^(k). We back-propagate the class-specific information down to the pool5 layer. The class-specific activation map g^l_i is obtained by combining both f_spat and f^l_{cls}. We first concatenate f_spat and f^l_{cls} along their channel direction, and forward-propagate the result through the fully-connected bridging layers, which discover the optimal combination of f_spat and f^l_{cls} using the trained weights. The resulting class-specific activation map g^l_i, which contains both spatial and class-specific information, is given to the segmentation network to produce a class-specific segmentation map.
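Eq. (3) is simply a product of layer Jacobians. For purely linear layers f^(m) = W_m f^(m−1) this product can be formed explicitly, giving a small sanity check of the chain rule (a toy stand-in for real back-propagation; the function name is ours):

```python
import numpy as np

def saliency_chain(Ws, k):
    """Jacobian dS^l/df^(k) for a toy linear 'network' f^(m) = W_m f^(m-1).

    Ws is [W_1, ..., W_M] (0-indexed), so Ws[k:] are layers k+1 .. M.
    Multiplying their Jacobians right-to-left reproduces Eq. (3):
    dS/df^(k) = W_M @ W_{M-1} @ ... @ W_{k+1}  (with S = f^(M)).
    """
    J = np.eye(Ws[-1].shape[0])
    for W in reversed(Ws[k:]):
        J = J @ W
    return J
```

For a real CNN the layer Jacobians are not explicit matrices, and frameworks evaluate the same product implicitly via back-propagation.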
Note that the changes in g^l_i depend only on f^l_cls, since f_spat is fixed for all classes in an input image.

Figure 3: Input image (left) and its segmentation maps (right) for individual classes, e.g., L*_i = {person, table, plant} with maps M_f(g^person_i), M_f(g^table_i), M_f(g^plant_i).

Figure 2 visualizes examples of class-specific activation maps g^l_i obtained from several validation images. The activations from images of the same class share similar patterns despite substantial appearance variations, which shows that the outputs of the bridging layers capture class-specific information effectively; this property makes it possible to obtain figure-ground segmentation maps for the individual relevant classes in the segmentation network. More importantly, it reduces the variation of the input distributions to the segmentation network, which makes it possible to achieve good generalization in segmentation even with a small number of training examples. For inference, we compute a class-specific activation map g^l_i for each identified label l ∈ L_i and obtain the class-specific segmentation maps {M(g^l_i; θ_s)} for all l ∈ L_i. In addition, we obtain M(g*_i; θ_s), where g*_i is the activation map from the bridging layers for all identified labels. The final label estimate is given by selecting, at each pixel, the label with the maximum score among {M_f(g^l_i; θ_s)} for l ∈ L_i and M_b(g*_i; θ_s). Figure 3 illustrates the output segmentation map for each g^l_i of x_i, where each map successfully highlights the high-response area for its g^l_i.

5 Training

In our semi-supervised learning scenario, the training examples carry mixed weak and strong annotations. Let W = {1, ..., N_w} and S = {1, ..., N_s} denote the index sets of images with image-level and pixel-wise class labels, respectively, where N_w ≫ N_s. We first train the classification network on the images in W by optimizing the loss function in Eq. (1).
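The per-pixel label fusion used at inference can be sketched as below. The function name and map layout are assumptions; the rule itself is just the per-pixel argmax over the per-class foreground scores and the background score described above.

```python
import numpy as np

def fuse_segmentations(fg_maps, bg_map, labels):
    """Sketch of the inference step: pick, at each pixel, the label with the
    maximum score among the class-specific foreground maps and the background map.
    fg_maps: dict label -> (H, W) foreground score map M_f(g^l);
    bg_map:  (H, W) background score map M_b(g*);
    returns an (H, W) array of label ids, 0 for background."""
    stacked = np.stack([bg_map] + [fg_maps[l] for l in labels])  # (1+|L|, H, W)
    winner = stacked.argmax(axis=0)          # per-pixel index of the best map
    ids = np.array([0] + list(labels))       # map indices back to label ids
    return ids[winner]
```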
Then, fixing the weights of the classification network, we jointly train the bridging layers and the segmentation network on the images in S by optimizing Eq. (2). To train the segmentation network, we obtain the class-specific activation map g^l_i from the bridging layers using the ground-truth class labels associated with x_i, i ∈ S. Note that optimizing the two networks separately reduces training complexity. Although the proposed algorithm has several advantages when training the segmentation network with few training images, it would still be better to have more training examples with strong annotations. Hence, we propose an effective data augmentation strategy, combinatorial cropping. Let L*_i denote the set of ground-truth labels associated with image x_i, i ∈ S. We enumerate all possible combinations of labels in P(L*_i), where P(L*_i) denotes the power set of L*_i. For each P ∈ P(L*_i) except the empty set (P ≠ ∅), we construct a binary ground-truth segmentation mask z^P_i by setting the pixels corresponding to every label l ∈ P as foreground and the rest as background. Then, we generate N_p sub-images enclosing the foreground areas based on a region proposal method [15] and random sampling. Through this simple data augmentation technique, we effectively obtain N_t = N_s + N_p · Σ_{i∈S} (2^{|L*_i|} − 1) training examples with strong annotations, where N_t ≫ N_s.

6 Experiments

6.1 Implementation Details

Dataset: We employ the PASCAL VOC 2012 dataset [1] for training and testing the proposed deep network. The dataset with extended annotations from [16], which contains 12,031 images with pixel-wise class labels, is used in our experiments. To simulate the semi-supervised learning scenario, we divide the training images into two non-disjoint subsets: W with weak annotations only and S with strong annotations as well. There are 10,582 images with image-level class labels, which are used to train our classification network.
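The subset enumeration behind combinatorial cropping is a small exercise in power sets. The sketch below (function names are ours) enumerates the 2^{|L*_i|} − 1 non-empty label subsets and builds the corresponding binary masks z^P_i; the cropping of N_p sub-images around each mask is omitted.

```python
from itertools import combinations

def label_combinations(gt_labels):
    """All non-empty subsets P of the ground-truth label set L*_i,
    i.e. 2^|L*_i| - 1 subsets, as used by combinatorial cropping."""
    subsets = []
    for r in range(1, len(gt_labels) + 1):
        subsets.extend(combinations(sorted(gt_labels), r))
    return subsets

def masks_for_image(gt_labels, pixel_labels):
    """For each subset P, a binary mask z_P: pixels whose label is in P become
    foreground (1), everything else background (0).
    pixel_labels: list of rows of per-pixel label ids (0 = background)."""
    masks = {}
    for P in label_combinations(gt_labels):
        masks[P] = [[1 if v in P else 0 for v in row] for row in pixel_labels]
    return masks
```

With N_s strongly annotated images this yields the N_t = N_s + N_p · Σ_i (2^{|L*_i|} − 1) count stated above once N_p crops are drawn per subset.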
We also construct training datasets with strongly annotated images; the number of images with segmentation labels per class is controlled to evaluate the impact of the supervision level. In our experiments, three cases (5, 10, or 25 training images with strong annotations per class) are tested to show the effectiveness of our semi-supervised framework. We evaluate the performance of the proposed algorithm on 1,449 validation images.

Table 1: Evaluation results on the PASCAL VOC 2012 validation set (mean IoU).

# of strongs     | DecoupledNet | WSSL-Small-FoV [8] | WSSL-Large-FoV [8] | DecoupledNet-Str | DeconvNet [12]
Full             | 67.5         | 63.9               | 67.6               | 67.5             | 67.1
25 (×20 classes) | 62.1         | 56.9               | 54.2               | 50.3             | 38.6
10 (×20 classes) | 57.4         | 47.6               | 38.9               | 41.7             | 21.5
5 (×20 classes)  | 53.1         | 32.7               | 15.3               |                  |

Table 2: Evaluation results on the PASCAL VOC 2012 test set (per-class and mean IoU).

Models            | bkg  aero bike bird boat bottle bus  car  cat  chair cow  table dog  horse mbk  prsn plnt sheep sofa train tv   | mean
DecoupledNet-Full | 91.5 78.8 39.9 78.1 53.8 68.3 83.2 78.2 80.6 25.8 62.6 55.5 75.1 77.2 77.1 76.0 47.8 74.1 47.5 66.4 60.4 | 66.6
DecoupledNet-25   | 90.1 75.8 41.7 70.4 46.4 66.2 83.0 69.9 76.7 23.1 61.2 43.3 70.4 75.7 74.1 65.7 46.2 73.8 39.7 61.9 57.6 | 62.5
DecoupledNet-10   | 88.5 73.8 40.1 68.1 45.5 59.5 76.4 62.7 71.4 17.7 60.4 39.9 64.5 73.0 68.5 56.0 43.4 70.8 37.8 60.3 54.2 | 58.7
DecoupledNet-5    | 87.4 70.4 40.9 60.4 36.3 61.2 67.3 67.7 64.6 12.8 60.2 26.4 63.2 69.6 64.8 53.1 34.7 65.3 34.4 57.0 50.5 | 54.7

Data Augmentation: We employ different strategies to augment training examples in the two datasets with weak and strong annotations. For images with weak annotations, simple techniques such as random cropping and horizontal flipping are employed, as suggested in [13]. For images with strong annotations, we perform the combinatorial cropping proposed in Section 5, where EdgeBox [15] is adopted to generate region proposals and N_p (= 200) sub-images are generated for each label combination.
Optimization: The proposed network is implemented in the Caffe library [17]. Standard stochastic gradient descent (SGD) with momentum is employed for optimization, with all parameters identical to [12]. We initialize the weights of the classification network using the VGG 16-layer net pre-trained on the ILSVRC [18] dataset. When we train the deep network with full annotations, the network converges after approximately 5.5K and 17.5K SGD iterations with mini-batches of 64 examples for the classification and segmentation networks, respectively; training takes 3 days (0.5 day for the classification network and 2.5 days for the segmentation network) on a single Nvidia GTX Titan X GPU with 12GB of memory. Note that training the segmentation network is much faster in our semi-supervised setting, while the training time of the classification network is unchanged.

6.2 Results on the PASCAL VOC Dataset

Our algorithm, denoted by DecoupledNet, is compared with the two variations of WSSL [8], another algorithm based on semi-supervised learning with heterogeneous annotations. We also test the performance of DecoupledNet-Str² and DeconvNet [12], which utilize only examples with strong annotations, to analyze the benefit of image-level weak annotations. All models in our experiments are learned only on the training set (not including the validation set) of the PASCAL VOC 2012 dataset. All algorithms except WSSL [8] report results without CRF post-processing. Segmentation accuracy is measured by Intersection over Union (IoU) between the ground-truth and predicted segmentations, and the mean IoU over the 20 semantic categories is used for the final performance evaluation. Table 1 summarizes the quantitative results on the PASCAL VOC 2012 validation set. Given the same amount of supervision, DecoupledNet achieves substantially better performance than WSSL [8], the directly comparable method, even without any post-processing.
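The mean IoU metric used above is straightforward to compute; a minimal sketch (the background handling and class-id convention here are simplifying assumptions, not the official evaluation script):

```python
import numpy as np

def mean_iou(pred, gt, num_classes):
    """Mean Intersection-over-Union over semantic classes 1..num_classes
    (0 treated as background and skipped in this sketch)."""
    ious = []
    for c in range(1, num_classes + 1):
        p, g = (pred == c), (gt == c)
        union = np.logical_or(p, g).sum()
        if union == 0:
            continue                 # class absent in both maps; ignore it
        ious.append(np.logical_and(p, g).sum() / union)
    return float(np.mean(ious)) if ious else 0.0
```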
In particular, our algorithm has a clear advantage over WSSL when the number of strong annotations is extremely small. We believe this is because DecoupledNet effectively reduces the search space for segmentation by employing the bridging layers, so the deep network can be trained with a smaller number of strongly annotated images. Our results are all the more meaningful because the training procedure of DecoupledNet is very straightforward compared to WSSL and does not involve the heuristic iterative procedures that are common in semi-supervised learning methods. When only a small number of strongly annotated training examples are available, our algorithm clearly outperforms DecoupledNet-Str and DeconvNet [12] by exploiting the rich information in weakly annotated images. It is interesting that DecoupledNet-Str is clearly better than DeconvNet, especially when the number of training examples is small. For reference, the best accuracy of an algorithm based only on examples with image-level labels is 42.0% [7], which is much lower than our result with five strongly annotated images per class, even though [7] requires significant effort in heuristic post-processing. These results show that even a little strong supervision can improve semantic segmentation performance dramatically. Table 2 presents more comprehensive results of our algorithm on the PASCAL VOC test set. Our algorithm works well in general and quickly approaches the empirical upper bound with a small number of strongly annotated images.

²This is identical to DecoupledNet except that its classification and segmentation networks are trained with the same images, where image-level weak annotations are generated from the pixel-wise segmentation annotations.

Figure 4: Semantic segmentation results on several PASCAL VOC 2012 validation images, based on models trained on different numbers of pixel-wise segmentation annotations.
A drawback of our algorithm is that it does not achieve state-of-the-art performance [3, 11, 12] when (almost³) full supervision is provided on the PASCAL VOC dataset. This is probably because our method optimizes the classification and segmentation networks separately, although joint optimization of the two objectives would be more desirable. However, note that our strategy is more appropriate for the semi-supervised learning scenario, as shown in our experiments. Figure 4 presents several qualitative results from our algorithm. Note that our model trained with only five strong annotations per class already shows good generalization, and that more training examples with strong annotations improve segmentation accuracy and reduce label confusion substantially. Refer to our project website⁴ for a more comprehensive qualitative evaluation.

7 Conclusion

We proposed a novel deep neural network architecture for semi-supervised semantic segmentation with heterogeneous annotations, in which the classification and segmentation networks are decoupled for both training and inference. The decoupled network is conceptually appropriate for exploiting heterogeneous and unbalanced training data with image-level class labels and/or pixel-wise segmentation annotations, and it simplifies the training procedure dramatically by discarding the complex iterative procedures for intermediate label inference. The bridging layers play a critical role in reducing the output space of segmentation and make it possible to learn the segmentation network from a handful of segmentation annotations. Experimental results validate the effectiveness of our decoupled network, which outperforms existing semi- and weakly-supervised approaches by substantial margins.

Acknowledgement: This work was partly supported by the ICT R&D program of MSIP/IITP [B0101-15-0307; ML Center, B0101-15-0552; DeepView] and Samsung Electronics Co., Ltd.

³We did not include the validation set for training and thus have fewer training examples than the competitors.
⁴http://cvlab.postech.ac.kr/research/decouplednet/

References

[1] Mark Everingham, Luc Van Gool, Christopher K. I. Williams, John Winn, and Andrew Zisserman. The PASCAL Visual Object Classes (VOC) challenge. IJCV, 88(2):303–338, 2010.
[2] Jonathan Long, Evan Shelhamer, and Trevor Darrell. Fully convolutional networks for semantic segmentation. In CVPR, 2015.
[3] Liang-Chieh Chen, George Papandreou, Iasonas Kokkinos, Kevin Murphy, and Alan L. Yuille. Semantic image segmentation with deep convolutional nets and fully connected CRFs. In ICLR, 2015.
[4] Bharath Hariharan, Pablo Arbeláez, Ross Girshick, and Jitendra Malik. Hypercolumns for object segmentation and fine-grained localization. In CVPR, 2015.
[5] Bharath Hariharan, Pablo Arbeláez, Ross Girshick, and Jitendra Malik. Simultaneous detection and segmentation. In ECCV, 2014.
[6] Mohammadreza Mostajabi, Payman Yadollahpour, and Gregory Shakhnarovich. Feedforward semantic segmentation with zoom-out features. In CVPR, 2015.
[7] Pedro O. Pinheiro and Ronan Collobert. Weakly supervised semantic segmentation with convolutional networks. In CVPR, 2015.
[8] George Papandreou, Liang-Chieh Chen, Kevin Murphy, and Alan L. Yuille. Weakly- and semi-supervised learning of a DCNN for semantic image segmentation. In ICCV, 2015.
[9] Deepak Pathak, Evan Shelhamer, Jonathan Long, and Trevor Darrell. Fully convolutional multi-class multiple instance learning. In ICLR, 2015.
[10] Jifeng Dai, Kaiming He, and Jian Sun. BoxSup: Exploiting bounding boxes to supervise convolutional networks for semantic segmentation. In ICCV, 2015.
[11] Shuai Zheng, Sadeep Jayasumana, Bernardino Romera-Paredes, Vibhav Vineet, Zhizhong Su, Dalong Du, Chang Huang, and Philip Torr. Conditional random fields as recurrent neural networks. In ICCV, 2015.
[12] Hyeonwoo Noh, Seunghoon Hong, and Bohyung Han. Learning deconvolution network for semantic segmentation. In ICCV, 2015.
[13] Karen Simonyan and Andrew Zisserman.
Very deep convolutional networks for large-scale image recognition. In ICLR, 2015.
[14] Karen Simonyan, Andrea Vedaldi, and Andrew Zisserman. Deep inside convolutional networks: Visualising image classification models and saliency maps. In ICLR Workshop, 2014.
[15] C. Lawrence Zitnick and Piotr Dollár. Edge boxes: Locating object proposals from edges. In ECCV, 2014.
[16] Bharath Hariharan, Pablo Arbeláez, Lubomir Bourdev, Subhransu Maji, and Jitendra Malik. Semantic contours from inverse detectors. In ICCV, 2011.
[17] Yangqing Jia, Evan Shelhamer, Jeff Donahue, Sergey Karayev, Jonathan Long, Ross Girshick, Sergio Guadarrama, and Trevor Darrell. Caffe: Convolutional architecture for fast feature embedding. arXiv preprint arXiv:1408.5093, 2014.
[18] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. ImageNet: a large-scale hierarchical image database. In CVPR, 2009.
Discrete Rényi Classifiers

Meisam Razaviyayn* (meisamr@stanford.edu), Farzan Farnia* (farnia@stanford.edu), David Tse* (dntse@stanford.edu)

Abstract

Consider the binary classification problem of predicting a target variable Y from a discrete feature vector X = (X_1, ..., X_d). When the probability distribution P(X, Y) is known, the optimal classifier, leading to the minimum misclassification rate, is given by the Maximum A-posteriori Probability (MAP) decision rule. However, in practice, estimating the complete joint distribution P(X, Y) is computationally and statistically impossible for large values of d. Therefore, an alternative approach is to first estimate some low-order marginals of the joint probability distribution P(X, Y) and then design the classifier based on the estimated low-order marginals. This approach is also helpful when the complete training data instances are not available due to privacy concerns. In this work, we consider the problem of finding the optimal classifier based on some estimated low-order marginals of (X, Y). We prove that, for a given set of marginals, the minimum Hirschfeld-Gebelein-Rényi (HGR) correlation principle introduced in [1] leads to a randomized classification rule which is shown to have a misclassification rate no larger than twice the misclassification rate of the optimal classifier. Then, under a separability condition, it is shown that the proposed algorithm is equivalent to a randomized linear regression approach. In addition, this method naturally yields a robust feature selection method that selects a subset of features having the maximum worst-case HGR correlation with the target variable. Our theoretical upper bound is similar to that of the recent Discrete Chebyshev Classifier (DCC) approach [2], while the proposed algorithm has significant computational advantages since it only requires solving a least-squares optimization problem.
Finally, we numerically compare our proposed algorithm with the DCC classifier and show that the proposed algorithm achieves a better misclassification rate on various UCI data repository datasets.

1 Introduction

Statistical classification, a core task in many modern data processing and prediction problems, is the problem of predicting labels for a given feature vector based on a set of training data instances containing feature vectors and their corresponding labels. From a probabilistic point of view, this problem can be formulated as follows: given data samples (X^1, Y^1), ..., (X^n, Y^n) from a probability distribution P(X, Y), predict the target label y_test for a given test point X = x_test. Many modern classification problems involve high-dimensional categorical features. For example, in genome-wide association studies (GWAS), the classification task is to predict a trait of interest based on observations of the SNPs in the genome. In this problem, the feature vector X = (X_1, ..., X_d) is categorical, with X_i ∈ {0, 1, 2}. What is the optimal classifier leading to the minimum misclassification rate for such a classification problem with high-dimensional categorical feature vectors? When the joint probability distribution of the random vector (X, Y) is known, the MAP decision rule, defined by δ_MAP ≜ argmax_y P(Y = y | X = x), achieves the minimum misclassification rate. However, in practice the joint probability distribution P(X, Y) is not known. Moreover, estimating the complete joint probability distribution is not possible due to the curse of dimensionality. For example, in the above GWAS problem, the dimension of the feature vector X is d ≈ 3,000,000, which leads to an alphabet size of 3^{3,000,000} for the feature vector X.

*Department of Electrical Engineering, Stanford University, Stanford, CA 94305.
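When the joint distribution is known, the MAP rule above is a one-line argmax. A toy illustration with a single ternary feature (the probability values are made up for the example):

```python
import numpy as np

# P[x, y] = P(X = x, Y = y) for X in {0, 1, 2}, Y in {0, 1} (made-up values)
P = np.array([[0.20, 0.05],
              [0.10, 0.25],
              [0.15, 0.25]])

def map_rule(x):
    # delta_MAP(x) = argmax_y P(Y = y | X = x) = argmax_y P(X = x, Y = y),
    # since the normalization P(X = x) does not affect the argmax
    return int(np.argmax(P[x]))

# Bayes misclassification rate: probability mass of the non-chosen label
error = sum(P[x, 1 - map_rule(x)] for x in range(3))  # = 0.05 + 0.10 + 0.15
```

The point of the paper is precisely that this table becomes impossible to estimate once X^d has exponentially many rows, motivating the marginal-based approach.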
Hence, a practical approach is to first estimate some low-order marginals of P(X, Y), and then use these low-order marginals to build a classifier with a low misclassification rate. This approach, which is the spirit of various machine learning and statistical methods [2–6], is also useful when the complete data instances are not available due to privacy concerns, in applications such as medical informatics. In this work, we consider the above problem of building a classifier from a given set of low-order marginals. First, we formally state the problem of finding the robust classifier with the minimum worst-case misclassification rate. Our goal is to find a (possibly randomized) decision rule with the minimum worst-case misclassification rate over all probability distributions satisfying the given low-order marginals. Then a surrogate objective function, obtained from the minimum HGR correlation principle [1], is used to propose a randomized classification rule. The proposed classification method has a worst-case misclassification rate no more than twice the misclassification rate of the optimal classifier. When only pairwise marginals are estimated, this classifier is shown to be a randomized linear regression classifier on indicator variables under a separability condition. Then, we formulate a feature selection problem, based on the knowledge of pairwise marginals, that leads to the minimum misclassification rate. Our analysis provides a theoretical justification for using the group lasso objective function for feature selection over a discrete set of features. Finally, we conclude by presenting numerical experiments comparing the proposed classifier with the Discrete Chebyshev Classifier [2], Tree Augmented Naive Bayes [3], and the Minimax Probabilistic Machine [4]. In short, the contributions of this work are as follows.

• Providing a rigorous theoretical justification for using the minimum HGR correlation principle for the binary classification problem.
• Proposing a randomized classifier with a misclassification rate no larger than twice the misclassification rate of the optimal classifier.
• Introducing a computationally efficient method for calculating the proposed randomized classifier when pairwise marginals are estimated and a separability condition is satisfied.
• Providing a mathematical justification, based on maximal correlation, for using the group lasso problem for feature selection on categorical data.

Related Work: The idea of learning structure in data through low-order marginals/moments is popular in machine learning and statistics. For example, the maximum entropy principle [7], which is the spirit of the variational method in graphical models [5] and tree augmented naive Bayes [3], is based on the idea of fixing the marginal distributions and fitting the probabilistic model that maximizes the Shannon entropy. Although these methods fit a probabilistic model satisfying the low-order marginals, they do not directly optimize the misclassification rate of the resulting classifier. Another related information-theoretic approach is the minimum mutual information principle [8], which finds the probability distribution with the minimum mutual information between the feature vector and the target variable. This approach is closely related to the framework of this paper; however, unlike for the minimum HGR principle, there is no known computationally efficient approach for calculating the probability distribution with the minimum mutual information. In the continuous setting, the idea of minimizing the worst-case misclassification rate leads to the minimax probability machine [4]. That algorithm and its analysis are not easily extendible to the discrete scenario. The algorithm most closely related to this work is the recent Discrete Chebyshev Classifier (DCC) algorithm [2].
The DCC is based on minimizing the worst-case misclassification rate over the class of probability distributions with given marginals of the form (X_i, X_j, Y). Similar to our framework, the DCC method achieves a misclassification rate no larger than twice the misclassification rate of the optimal classifier. However, computing the DCC classifier requires solving a non-separable, non-smooth optimization problem, which is computationally demanding, while the proposed algorithm leads to a least-squares optimization problem with a closed-form solution. Furthermore, in contrast to [2], which only considers deterministic decision rules, in this work we consider the class of randomized decision rules. Finally, it is worth noting that the algorithm in [2] requires the tree structure to be tight, while our proposed algorithm works on non-tree structures as long as the separability condition is satisfied.

2 Problem Formulation

Consider the binary classification problem with d discrete features X_1, X_2, ..., X_d ∈ X and a target variable Y ∈ Y ≜ {0, 1}. Without loss of generality, assume that X ≜ {1, 2, ..., m} and that the data points (X, Y) come from an underlying probability distribution P̄_{X,Y}(x, y). If the joint probability distribution P̄(x, y) is known, the optimal classifier is given by the maximum a posteriori probability (MAP) estimator, i.e., MAP(x) ≜ argmax_{y∈{0,1}} P̄(Y = y | X = x). However, the joint probability distribution P̄(x, y) is often not known in practice. Therefore, in order to utilize the MAP rule, one should first estimate P̄(x, y) from the training data instances. Unfortunately, estimating the joint probability distribution requires estimating the value of P̄(X = x, Y = y) for all (x, y) ∈ X^d × Y, which is intractable for large values of d.
Therefore, as mentioned earlier, our approach is to first estimate some low-order marginals of the joint probability distribution P̄(·), and then utilize the minimax criterion for classification. Let C be the class of probability distributions satisfying the estimated marginals. For example, when only pairwise marginals of the ground-truth distribution P̄ are estimated, C is the class of distributions satisfying the given pairwise marginals, i.e.,

C_pairwise ≜ { P_{X,Y}(·,·) | P_{X_i,X_j}(x_i, x_j) = P̄_{X_i,X_j}(x_i, x_j), P_{X_i,Y}(x_i, y) = P̄_{X_i,Y}(x_i, y), ∀x_i, x_j ∈ X, ∀y ∈ Y, ∀i, j }.   (1)

In general, C could be any class of probability distributions satisfying a set of estimated low-order marginals. Let us also define δ to be a randomized classification rule with

δ(x) = 0 with probability q^x_δ, and δ(x) = 1 with probability 1 − q^x_δ,

for some q^x_δ ∈ [0, 1], ∀x ∈ X^d. Given a randomized decision rule δ and a joint probability distribution P_{X,Y}(x, y), we can extend P(·) to include our randomized decision rule. Then the misclassification rate of the decision rule δ, under the probability distribution P(·), is given by P(δ(X) ≠ Y). Hence, under the minimax criterion, we look for a decision rule δ* which minimizes the worst-case misclassification rate. In other words, the robust decision rule is given by

δ* ∈ argmin_{δ∈D} max_{P∈C} P(δ(X) ≠ Y),   (2)

where D is the set of all randomized decision rules. Notice that the optimal decision rule δ* may not be unique in general.

3 Worst-Case Error Minimization

In this section, we propose a surrogate objective for (2) which leads to a decision rule with a misclassification rate no larger than twice that of the optimal decision rule δ*. Later we show that the proposed surrogate objective is connected to the minimum HGR principle [1]. Let us start by rewriting (2) as an optimization problem over real-valued variables.
Notice that each probability distribution P_{X,Y}(·,·) can be represented by a probability vector p = [p_{x,y} | (x, y) ∈ X^d × Y] ∈ R^{2m^d} with p_{x,y} = P(X = x, Y = y) and Σ_{x,y} p_{x,y} = 1. Similarly, every randomized rule δ can be represented by a vector q_δ = [q^x_δ | x ∈ X^d] ∈ R^{m^d}. Adopting these notations, the set C can be rewritten in terms of the probability vector p as

C ≜ { p | Ap = b, 1^T p = 1, p ≥ 0 },

where the system of linear equations Ap = b represents all the low-order marginal constraints, and 1 denotes the vector of all ones. Therefore, problem (2) can be reformulated as

q*_δ ∈ argmin_{0 ≤ q_δ ≤ 1} max_{p∈C} Σ_x ( q^x_δ p_{x,1} + (1 − q^x_δ) p_{x,0} ),   (3)

where p_{x,0} and p_{x,1} denote the elements of the vector p corresponding to the probability values P(X = x, Y = 0) and P(X = x, Y = 1), respectively. A simple application of the minimax theorem [9] implies that the saddle point of the above optimization problem exists and, moreover, that the optimal decision rule is a MAP rule for a certain probability distribution P* ∈ C. In other words, there exists a pair (δ*, P*) for which

P(δ*(X) ≠ Y) ≤ P*(δ*(X) ≠ Y), ∀P ∈ C,   and   P*(δ(X) ≠ Y) ≥ P*(δ*(X) ≠ Y), ∀δ ∈ D.

Although the above observation characterizes the optimal decision rule to some extent, it does not provide a computationally efficient approach for finding it. Notice that it is NP-hard even to verify the existence of a probability distribution satisfying a given set of low-order marginals [10]. Based on this observation and the result in [11], we conjecture that, in general, solving (2) is NP-hard in the number of variables and the alphabet size even when the set C is nonempty. Hence, here we focus on developing a framework to find an approximate solution of (2). Let us continue by utilizing the minimax theorem [9] to obtain the worst-case probability distribution in (3) by p* ∈ argmax_{p∈C} min_{0 ≤ q_δ ≤ 1} Σ_x ( q^x_δ p_{x,1} + (1 − q^x_δ) p_{x,0} ), or equivalently,

p* ∈ argmax_{p∈C} Σ_x min { p_{x,0}, p_{x,1} }.
(4)

Despite the convexity of the above problem, there are two sources of hardness which make it intractable for moderate and large values of d. First, the objective function is non-smooth. Second, the number of optimization variables is 2m^d, which grows exponentially with the alphabet size. To deal with the first issue, notice that the function inside the summation is the max-min fairness objective between the two quantities p_{x,1} and p_{x,0}. Replacing this objective with the harmonic average leads to the following smooth convex optimization problem:

p̃ ∈ argmax_{p∈C} Σ_x  p_{x,1} p_{x,0} / (p_{x,1} + p_{x,0}).   (5)

It is worth noting that the harmonic mean of the two quantities is intuitively a reasonable surrogate for the original objective function, since

p_{x,1} p_{x,0} / (p_{x,1} + p_{x,0}) ≤ min { p_{x,0}, p_{x,1} } ≤ 2 p_{x,1} p_{x,0} / (p_{x,1} + p_{x,0}).   (6)

Although this inequality suggests that the objective functions in (5) and (4) are close to each other, it is not clear whether the distribution p̃ leads to a classification rule with a low misclassification rate for all distributions in C. To obtain a classification rule from p̃, the first naive approach is to use the MAP decision rule based on p̃. However, the following result shows that this decision rule does not achieve the factor-two misclassification rate obtained in [2].

Theorem 1. Define δ̃_map(x) ≜ argmax_{y∈Y} p̃_{x,y}, with worst-case error probability ẽ_map ≜ max_{P∈C} P( δ̃_map(X) ≠ Y ). Then e* ≤ ẽ_map ≤ 4e*, where e* is the worst-case misclassification rate of the optimal decision rule δ*, that is, e* ≜ max_{P∈C} P(δ*(X) ≠ Y).

Proof: The proof is similar to the proof of the next theorem and hence omitted here.

Next we show that, surprisingly, one can obtain a randomized decision rule from the solution of (5) which has a misclassification rate no larger than twice that of the optimal decision rule δ*.
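The sandwich inequality (6) is what makes the smooth harmonic-mean surrogate a 2-approximation of the max-min objective; it is easy to verify numerically on random nonnegative pairs (a quick sanity check, not part of the paper):

```python
import numpy as np

# For nonnegative a, b: ab/(a+b) <= min(a, b) <= 2ab/(a+b).
# (Proof sketch: if a <= b then ab/(a+b) <= ab/b = a, and a <= 2ab/(a+b)
#  is equivalent to a + b <= 2b, i.e. a <= b.)
rng = np.random.default_rng(1)
a = rng.uniform(1e-6, 1.0, size=1000)
b = rng.uniform(1e-6, 1.0, size=1000)
harm = a * b / (a + b)
mn = np.minimum(a, b)
assert np.all(harm <= mn + 1e-12) and np.all(mn <= 2 * harm + 1e-12)
```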
Given p̃ as the optimal solution of (5), define the random decision rule δ̃ as

δ̃(x) = 0 with probability p̃²_{x,0} / (p̃²_{x,0} + p̃²_{x,1}), and δ̃(x) = 1 with probability p̃²_{x,1} / (p̃²_{x,0} + p̃²_{x,1}).   (7)

Let ẽ be the worst-case classification error of the decision rule δ̃, i.e., ẽ ≜ max_{P∈C} P( δ̃(X) ≠ Y ). Clearly, e* ≤ ẽ by the definition of the optimal decision rule δ*. The following theorem shows that ẽ is also upper-bounded by twice the optimal misclassification rate e*.

Theorem 2. Define

θ ≜ max_{p∈C} Σ_x  p_{x,1} p_{x,0} / (p_{x,1} + p_{x,0}).   (8)

Then θ ≤ ẽ ≤ 2θ ≤ 2e*. In other words, the worst-case misclassification rate of the decision rule δ̃ is at most twice that of the optimal decision rule δ*.

Proof: The proof is relegated to the supplementary materials.

So far, we have resolved the non-smoothness issue in solving (4) by using a surrogate objective function. In the next section, we resolve the second issue by establishing the connection between problem (5) and the minimum HGR correlation principle [1]. Then we use the existing result in [1] to develop a computationally efficient approach for calculating the decision rule δ̃(·) for C_pairwise.

4 Connection to Hirschfeld-Gebelein-Rényi Correlation

A commonplace approach to inferring models from data is to employ the maximum entropy principle [7]. This principle states that, given a set of constraints on the ground-truth distribution, the distribution with the maximum (Shannon) entropy under those constraints is a proper representative of the class. To extend this rule to the classification problem, the authors in [8] suggest picking the distribution maximizing the entropy of the target conditioned on the features, or equivalently minimizing the mutual information between target and features. Unfortunately, this approach does not lead to a computationally efficient model-fitting procedure, and there is no guarantee on the misclassification rate of the resulting classifier. Here we study the alternative approach of the minimum HGR correlation principle [1].
This principle picks the distribution in C minimizing the HGR correlation between the target variable and the features. The HGR correlation coefficient between two random objects X and Y, first introduced by Hirschfeld and Gebelein [12, 13] and then studied by Rényi [14], is defined as

ρ(X, Y) ≜ sup_{f,g} E[ f(X) g(Y) ],

where the supremum is taken over the class of all measurable functions f(·) and g(·) with E[f(X)] = E[g(Y)] = 0 and E[f²(X)] = E[g²(Y)] = 1. The HGR correlation coefficient has many desirable properties. For example, it is normalized to lie between 0 and 1. Furthermore, this coefficient is zero if and only if the two random variables are independent, and it is one if there is a strict dependence between X and Y. For other properties of the HGR correlation coefficient, see [14, 15] and the references therein.

Lemma 1. Assume the random variable Y is binary and define q ≜ P(Y = 0). Then

ρ(X, Y) = sqrt( 1 − (1 / (q(1 − q))) Σ_x  P_{X,Y}(x, 0) P_{X,Y}(x, 1) / ( P_{X,Y}(x, 0) + P_{X,Y}(x, 1) ) ).

Proof: The proof is relegated to the supplementary material.

This lemma leads to the following observation.

Observation: Assume the marginal probabilities P(Y = 0) and P(Y = 1) are fixed for every distribution P ∈ C. Then the distribution in C with the minimum HGR correlation between X and Y is the distribution P̃ obtained by solving (5). In other words, ρ(X, Y; P̃) ≤ ρ(X, Y; P), ∀P ∈ C, where ρ(X, Y; P) denotes the HGR correlation coefficient under the probability distribution P.

Based on the above observation, from now on we call the classifier δ̃(·) in (7) the "Rényi classifier". In the next section, we use the result of the recent work [1] to compute the Rényi classifier δ̃(·) for the special class of marginals C = C_pairwise.

5 Computing the Rényi Classifier Based on Pairwise Marginals

In many practical problems, the number of features d is large, and therefore it is only computationally tractable to estimate marginals of order at most two.
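Lemma 1 gives a closed form for the HGR coefficient with a binary Y that is easy to evaluate numerically. A small sketch (the joint distributions in the test are made-up examples; in particular, independence should give ρ = 0 and a deterministic relation should give ρ = 1):

```python
import numpy as np

def hgr_binary(P):
    """HGR correlation between X and a binary Y via the formula of Lemma 1.
    P: (|X|, 2) array of the joint distribution, P[x, y] = P(X = x, Y = y)."""
    q = P[:, 0].sum()                                      # q = P(Y = 0)
    s = np.sum(P[:, 0] * P[:, 1] / (P[:, 0] + P[:, 1]))    # the harmonic sum
    return np.sqrt(max(0.0, 1.0 - s / (q * (1.0 - q))))    # clip tiny negatives
```

This is the same harmonic-mean sum as in the surrogate (5), which is exactly the connection the Observation states.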
Hence, hereafter we restrict ourselves to the case where only the first- and second-order marginals of the distribution $\bar{P}$ are estimated, i.e., $\mathcal{C} = \mathcal{C}_{\text{pairwise}}$. In this scenario, in order to predict the output of the Rényi classifier for a given data point $x$, one needs to find the values of $\tilde{p}_{x,0}$ and $\tilde{p}_{x,1}$. Next, we state a result from [1] which sheds light on the computation of $\tilde{p}_{x,0}$ and $\tilde{p}_{x,1}$. To state the theorem, we need the following definitions: let the matrix $Q \in \mathbb{R}^{dm \times dm}$ and the vector $\mathbf{d} \in \mathbb{R}^{dm \times 1}$ be defined through their entries as
$$Q_{mi+k,\, mj+\ell} = \bar{P}(X_{i+1} = k,\, X_{j+1} = \ell), \qquad d_{mi+k} = \bar{P}(X_{i+1} = k,\, Y = 1) - \bar{P}(X_{i+1} = k,\, Y = 0),$$
for every $i, j = 0, \ldots, d-1$ and $k, \ell = 1, \ldots, m$. Also define the function $h(\mathbf{z}) : \mathbb{R}^{md \times 1} \mapsto \mathbb{R}$ as $h(\mathbf{z}) \triangleq \sum_{i=1}^{d} \max\{z_{mi-m+1}, z_{mi-m+2}, \ldots, z_{mi}\}$. Then, we have the following theorem.

Theorem 3 (Rephrased from [1]) Assume $\mathcal{C}_{\text{pairwise}} \neq \emptyset$. Let
$$\gamma \triangleq \min_{\mathbf{z} \in \mathbb{R}^{md \times 1}} \mathbf{z}^T Q \mathbf{z} - \mathbf{d}^T \mathbf{z} + \frac{1}{4}. \qquad (9)$$
Then, $\sqrt{1 - \frac{\gamma}{q(1-q)}} \leq \min_{P \in \mathcal{C}_{\text{pairwise}}} \rho(X, Y; P)$, where the inequality holds with equality if and only if there exists a solution $\mathbf{z}^*$ to (9) such that $h(\mathbf{z}^*) \leq \frac{1}{2}$ and $h(-\mathbf{z}^*) \leq \frac{1}{2}$; or equivalently, if and only if the following separability condition is satisfied for some $P \in \mathcal{C}_{\text{pairwise}}$:
$$\mathbb{E}_P[Y \mid X = x] = \sum_{i=1}^{d} \zeta_i(x_i), \quad \forall x \in \mathcal{X}^d, \qquad (10)$$
for some functions $\zeta_1, \ldots, \zeta_d$. Moreover, if the separability condition holds, then
$$\tilde{P}\big(Y = y \mid X = (x_1, \ldots, x_d)\big) = \frac{1}{2} - (-1)^y \sum_{i=1}^{d} z^*_{(i-1)m + x_i}. \qquad (11)$$

Combining the above theorem with the equality
$$\frac{\tilde{P}^2(Y = 0, X = x)}{\tilde{P}^2(Y = 0, X = x) + \tilde{P}^2(Y = 1, X = x)} = \frac{\tilde{P}^2(Y = 0 \mid X = x)}{\tilde{P}^2(Y = 0 \mid X = x) + \tilde{P}^2(Y = 1 \mid X = x)}$$
implies that the decision rules $\tilde{\delta}$ and $\tilde{\delta}^{\text{map}}$ can be computed in a computationally efficient manner under the separability condition. Notice that when the separability condition is not satisfied, the approach proposed in this section still provides a classification rule whose error rate is bounded by $2\gamma$. However, this error rate no longer carries a 2-factor approximation guarantee.
It is also worth mentioning that the separability condition is a property of the class of distributions $\mathcal{C}_{\text{pairwise}}$ and is independent of the classifier at hand. Moreover, this condition is satisfied with positive measure over the simplex of all probability distributions, as discussed in [1]. Two remarks are in order:

Inexact knowledge of the marginal distribution: The optimization problem (9) is equivalent to solving the stochastic optimization problem
$$\mathbf{z}^* = \arg\min_{\mathbf{z}} \ \mathbb{E}\left[\left(\mathbf{W}^T \mathbf{z} - C\right)^2\right],$$
where $\mathbf{W} \in \{0,1\}^{md \times 1}$ is a random vector with $W_{m(i-1)+k} = 1$ if $X_i = k$ and $W_{m(i-1)+k} = 0$ otherwise. Also define the random variable $C \in \{-\frac{1}{2}, \frac{1}{2}\}$ with $C = \frac{1}{2}$ if the random variable $Y = 1$ and $C = -\frac{1}{2}$ otherwise. Here the expectation can be calculated with respect to any distribution in $\mathcal{C}$. Hence, in practice, the above optimization problem can be approximated using the Sample Average Approximation (SAA) method [16, 17] through the optimization problem
$$\hat{\mathbf{z}} = \arg\min_{\mathbf{z}} \ \frac{1}{n} \sum_{i=1}^{n} \left((\mathbf{w}^i)^T \mathbf{z} - c^i\right)^2,$$
where $(\mathbf{w}^i, c^i)$ corresponds to the $i$-th training data point $(\mathbf{x}^i, y^i)$. Clearly, this is a least-squares problem with a closed-form solution. Notice that in order to bound the SAA error and avoid overfitting, one can restrict the search space for $\hat{\mathbf{z}}$ [18]. This can also be done using regularizers, such as ridge regression, by solving
$$\hat{\mathbf{z}}^{\text{ridge}} = \arg\min_{\mathbf{z}} \ \frac{1}{n} \sum_{i=1}^{n} \left((\mathbf{w}^i)^T \mathbf{z} - c^i\right)^2 + \lambda_{\text{ridge}} \|\mathbf{z}\|_2^2.$$

Beyond pairwise marginals: When $d$ is small, one might be interested in estimating higher-order marginals for predicting $Y$. In this scenario, a simple modification of the algorithm is to define the new set of feature random variables $\{\widetilde{X}_{ij} = (X_i, X_j) \mid i \neq j\}$ and apply the algorithm to the new set of feature variables. It is not hard to see that this approach utilizes the marginal information $P(X_i, X_j, X_k, X_\ell)$ and $P(X_i, X_j, Y)$.

6 Robust Rényi Feature Selection

The task of feature selection for classification purposes is to preselect a subset of features for use in model fitting and prediction.
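Returning to the SAA formulation above, the ridge-regularized problem has a closed form in the one-hot indicator encoding of Section 5. A minimal sketch; the helper names and the $\lambda$ value are illustrative assumptions:

```python
import numpy as np

def one_hot(X, m):
    """Encode X (n, d) with entries in {1, ..., m} as the indicator
    vectors w of Section 5, shape (n, d*m): w_{m(i-1)+k} = 1 iff X_i = k."""
    n, d = X.shape
    W = np.zeros((n, d * m))
    for i in range(d):
        W[np.arange(n), i * m + (X[:, i] - 1)] = 1.0
    return W

def saa_ridge(X, y, m, lam=1e-2):
    """Closed-form minimizer of
       (1/n) sum_i ((w^i)^T z - c^i)^2 + lam ||z||_2^2,
    with c^i = +1/2 if y^i = 1 and -1/2 otherwise (normal equations)."""
    W = one_hot(X, m)
    c = np.where(y == 1, 0.5, -0.5)
    n, p = W.shape
    A = W.T @ W / n + lam * np.eye(p)
    b = W.T @ c / n
    return np.linalg.solve(A, b)
```

Following (11), $\tilde{P}(Y = 1 \mid x) \approx \frac{1}{2} + \sum_i \hat{z}_{(i-1)m + x_i}$, so the sign of the score $\mathbf{w}^T \hat{\mathbf{z}}$ gives the MAP label.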
Shannon mutual information, which is a measure of dependence between two random variables, is used in many recent works as an objective for feature selection [19, 20]. In these works, the idea is to select a small subset of features with maximum dependence on the target variable $Y$. In other words, the task is to find a subset of variables $S \subseteq \{1, \ldots, d\}$ with $|S| \leq k$ through the optimization problem
$$S^{\text{MI}} \triangleq \arg\max_{S \subseteq \{1, \ldots, d\}} I(X_S; Y), \qquad (12)$$
where $X_S \triangleq (X_i)_{i \in S}$ and $I(X_S; Y)$ denotes the mutual information between the random variables $X_S$ and $Y$. Almost all existing approaches for solving (12) are heuristic and greedy in nature, aiming to find a sub-optimal solution of (12). Here, we suggest replacing mutual information with maximal correlation. Furthermore, since estimating the joint distribution of $X$ and $Y$ is computationally and statistically intractable for a large number of features $d$, we suggest estimating some low-order marginals of the ground-truth distribution $\bar{P}(X, Y)$ and then solving the following robust Rényi feature selection problem:
$$S^{\text{RFS}} \triangleq \arg\max_{S \subseteq \{1, \ldots, d\}} \min_{P \in \mathcal{C}} \rho(X_S, Y; P). \qquad (13)$$
When only pairwise marginals are estimated from the training data, i.e., $\mathcal{C} = \mathcal{C}_{\text{pairwise}}$, maximizing the lower bound $\sqrt{1 - \frac{\gamma}{q(1-q)}}$ instead of (13) leads to the optimization problem
$$\hat{S}^{\text{RFS}} \triangleq \arg\max_{|S| \leq k} \sqrt{1 - \frac{1}{q(1-q)} \min_{\mathbf{z} \in \mathcal{Z}_S} \left(\mathbf{z}^T Q \mathbf{z} - \mathbf{d}^T \mathbf{z} + \frac{1}{4}\right)},$$
or equivalently,
$$\hat{S}^{\text{RFS}} \triangleq \arg\min_{|S| \leq k} \min_{\mathbf{z} \in \mathcal{Z}_S} \mathbf{z}^T Q \mathbf{z} - \mathbf{d}^T \mathbf{z},$$
where $\mathcal{Z}_S \triangleq \left\{\mathbf{z} \in \mathbb{R}^{md} \,\middle|\, \sum_{k=1}^{m} |z_{mi-m+k}| = 0, \ \forall i \notin S\right\}$. This problem is combinatorial in nature. However, using the standard group Lasso regularizer leads to the feature selection procedure in Algorithm 1.

Algorithm 1 Robust Rényi Feature Selection
Choose a regularization parameter $\lambda > 0$ and define $h(\mathbf{z}) \triangleq \sum_{i=1}^{d} \max\{z_{mi-m+1}, \ldots, z_{mi}\}$.
Let $\hat{\mathbf{z}}^{\text{RFS}} \in \arg\min_{\mathbf{z}} \mathbf{z}^T Q \mathbf{z} - \mathbf{d}^T \mathbf{z} + \lambda h(|\mathbf{z}|)$.
Set $S = \{i \mid \sum_{k=1}^{m} |\hat{z}^{\text{RFS}}_{mi-m+k}| > 0\}$.
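Since $h(|\mathbf{z}|) = \sum_g \|\mathbf{z}_g\|_\infty$ over the per-feature groups, Algorithm 1 can be run with proximal gradient descent: by Moreau decomposition, the prox of $\lambda\|\cdot\|_\infty$ on a group is the group minus its Euclidean projection onto the $\ell_1$-ball of radius $\lambda$. A hedged sketch (step size, iteration count, and the synthetic $Q$, $\mathbf{d}$ in the usage are illustrative; the paper's Remark 2 instead suggests ADMM):

```python
import numpy as np

def project_l1(v, r):
    """Euclidean projection of v onto the l1-ball of radius r
    (sort-and-threshold algorithm)."""
    if np.abs(v).sum() <= r:
        return v.copy()
    u = np.sort(np.abs(v))[::-1]
    css = np.cumsum(u)
    k = np.arange(1, v.size + 1)
    rho = np.nonzero(u * k > css - r)[0][-1]
    tau = (css[rho] - r) / (rho + 1.0)
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def prox_group_linf(z, m, lam):
    """Prox of lam * sum_g ||z_g||_inf over consecutive groups of size m,
    via Moreau: prox_{lam ||.||_inf}(v) = v - proj_{l1-ball(lam)}(v)."""
    out = z.copy()
    for g in range(z.size // m):
        sl = slice(g * m, (g + 1) * m)
        out[sl] = z[sl] - project_l1(z[sl], lam)
    return out

def rfs(Q, d, m, lam, step=None, iters=500):
    """Algorithm 1 sketch: prox-gradient on z^T Q z - d^T z + lam h(|z|).
    Assumes Q symmetric PSD, so the smooth gradient is 2 Q z - d."""
    if step is None:
        step = 1.0 / (2 * np.linalg.norm(Q, 2))  # 1 / Lipschitz constant
    z = np.zeros_like(d)
    for _ in range(iters):
        z = prox_group_linf(z - step * (2 * Q @ z - d), m, step * lam)
    S = {i for i in range(d.size // m) if np.abs(z[i*m:(i+1)*m]).sum() > 1e-10}
    return z, S
```

On a toy problem with $Q = I$ and $\mathbf{d}$ concentrated on the first group, the second group is zeroed out and only feature 0 is selected, as intended.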
Notice that, when the pairwise marginals are estimated from a set of training data points, the above feature selection procedure is equivalent to applying the group Lasso regularizer to the standard linear regression problem over the domain of indicator variables. Our framework provides a justification for this approach based on the robust maximal correlation feature selection problem (13).

Remark 1 Another natural approach to defining the feature selection procedure is to select a subset of features $S$ by minimizing the worst-case classification error, i.e., solving the optimization problem
$$\min_{|S| \leq k} \min_{\delta \in \mathcal{D}_S} \max_{P \in \mathcal{C}} P(\delta(X) \neq Y), \qquad (14)$$
where $\mathcal{D}_S$ is the set of randomized decision rules which only use the feature variables in $S$. Define $F(S) \triangleq \min_{\delta \in \mathcal{D}_S} \max_{P \in \mathcal{C}} P(\delta(X) \neq Y)$. It can be shown that $F(S) \leq \min_{|S| \leq k} \min_{\mathbf{z} \in \mathcal{Z}_S} \mathbf{z}^T Q \mathbf{z} - \mathbf{d}^T \mathbf{z} + \frac{1}{4}$. Therefore, another justification for Algorithm 1 is that it minimizes an upper bound of $F(S)$ rather than $F(S)$ itself.

Remark 2 The Alternating Direction Method of Multipliers (ADMM) algorithm [21] can be used for solving the optimization problem in Algorithm 1; see the supplementary material for more details.

7 Numerical Results

We evaluated the performance of the Rényi classifiers $\tilde{\delta}$ and $\tilde{\delta}^{\text{map}}$ on five different binary classification datasets from the UCI machine learning repository. The results are compared with five different benchmarks used in [2]: the Discrete Chebyshev Classifier [2], greedy DCC [2], Tree Augmented Naive Bayes [3], the Minimax Probabilistic Machine [4], and support vector machines (SVM). In addition to the classifiers $\tilde{\delta}$ and $\tilde{\delta}^{\text{map}}$, which only use pairwise marginals, we also use higher-order marginals in $\tilde{\delta}_2$ and $\tilde{\delta}^{\text{map}}_2$. These classifiers are obtained by defining the new feature variables $\{\widetilde{X}_{ij} = (X_i, X_j)\}$ as discussed in Section 5. Since in this scenario the number of features is large, we combine our Rényi classifier with the proposed group Lasso feature selection.
In other words, we first select a subset of $\{\widetilde{X}_{ij}\}$ and then find the maximum correlation classifier for the selected features. The values of $\lambda_{\text{ridge}}$ and $\lambda$ are determined through cross-validation. The results are averaged over 100 Monte Carlo runs, each using 70% of the data for training and the rest for testing. The results are summarized in the table below, where each number shows the percentage error of each method. The boldface numbers denote the best performance on each dataset. As can be seen in this table, on four of the tested datasets, at least one of the proposed methods outperforms the other benchmarks. Furthermore, it can be seen that the classifier $\tilde{\delta}^{\text{map}}$ on average performs better than $\tilde{\delta}$. This fact could be due to the specific properties of the underlying probability distribution in each dataset.

Datasets    δ̃map   δ̃    δ̃map₂  δ̃₂   δ̃map_FS,2  δ̃_FS,2  DCC  gDCC  MPM  TAN  SVM
adult        17    21    16    20     16        20     18    18    22   18   22
credit       13    16    16    17     16        17     14    13    13   17   16
kr-vs-kp      5    10     5    14      5        14     10    10     5    7    3
promoters     6    16     3     4      3         4      5     3     6   44    9
votes         3     4     3     4      2         4      3     3     4    8    5

In order to evaluate the computational efficiency of the Rényi classifier, we compare its running time with SVM on a synthetic data set with $d = 10{,}000$ features and $n = 200$ data points. Each feature $X_i$ is generated by an i.i.d. Bernoulli distribution with $P(X_i = 1) = 0.7$. The target variable $y$ is generated by $y = \mathrm{sign}(\boldsymbol{\alpha}^T X + n)$ with $n \sim \mathcal{N}(0, 1)$, and $\boldsymbol{\alpha} \in \mathbb{R}^d$ is generated with 30% nonzero elements, each drawn from the standard Gaussian distribution $\mathcal{N}(0, 1)$. The results are averaged over 1000 Monte Carlo runs of generating the data set, using 85% of the data points for training and 15% for testing. The Rényi classifier is obtained by the gradient descent method with regularizer $\lambda_{\text{ridge}} = 10^4$. The numerical experiment shows a 19.7% average misclassification rate for SVM and 19.9% for the Rényi classifier.
However, the average training time of the Rényi classifier is 0.2 seconds, while the training time of SVM (with the Matlab SVM command) is 1.25 seconds.

Acknowledgments: The authors are grateful to Stanford University for supporting a Stanford Graduate Fellowship, and to the Center for Science of Information (CSoI), an NSF Science and Technology Center under grant agreement CCF-0939370, for the support during this research.

References
[1] F. Farnia, M. Razaviyayn, S. Kannan, and D. Tse. Minimum HGR correlation principle: From marginals to joint distribution. arXiv preprint arXiv:1504.06010, 2015.
[2] E. Eban, E. Mezuman, and A. Globerson. Discrete Chebyshev classifiers. In Proceedings of the 31st International Conference on Machine Learning (ICML-14), pages 1233–1241, 2014.
[3] N. Friedman, D. Geiger, and M. Goldszmidt. Bayesian network classifiers. Machine Learning, 29(2-3):131–163, 1997.
[4] G. R. G. Lanckriet, L. El Ghaoui, C. Bhattacharyya, and M. I. Jordan. A robust minimax approach to classification. The Journal of Machine Learning Research, 3:555–582, 2003.
[5] M. I. Jordan, Z. Ghahramani, T. S. Jaakkola, and L. K. Saul. An introduction to variational methods for graphical models. Machine Learning, 37(2):183–233, 1999.
[6] T. Roughgarden and M. Kearns. Marginals-to-models reducibility. In Advances in Neural Information Processing Systems, pages 1043–1051, 2013.
[7] E. T. Jaynes. Information theory and statistical mechanics. Physical Review, 106(4):620, 1957.
[8] A. Globerson and N. Tishby. The minimum information principle for discriminative learning. In Proceedings of the 20th Conference on Uncertainty in Artificial Intelligence, pages 193–200. AUAI Press, 2004.
[9] M. Sion. On general minimax theorems. Pacific Journal of Mathematics, 8(1):171–176, 1958.
[10] J. De Loera and S. Onn. The complexity of three-way statistical tables. SIAM Journal on Computing, 33(4):819–836, 2004.
[11] D. Bertsimas and J. Sethuraman. Moment problems and semidefinite optimization.
In Handbook of Semidefinite Programming, pages 469–509. Springer, 2000.
[12] H. O. Hirschfeld. A connection between correlation and contingency. In Mathematical Proceedings of the Cambridge Philosophical Society, volume 31, pages 520–524. Cambridge University Press, 1935.
[13] H. Gebelein. Das statistische Problem der Korrelation als Variations- und Eigenwertproblem und sein Zusammenhang mit der Ausgleichsrechnung. ZAMM - Zeitschrift für Angewandte Mathematik und Mechanik, 21(6):364–379, 1941.
[14] A. Rényi. On measures of dependence. Acta Mathematica Hungarica, 10(3):441–451, 1959.
[15] V. Anantharam, A. Gohari, S. Kamath, and C. Nair. On maximal correlation, hypercontractivity, and the data processing inequality studied by Erkip and Cover. arXiv preprint arXiv:1304.6133, 2013.
[16] A. Shapiro, D. Dentcheva, and A. Ruszczyński. Lectures on Stochastic Programming: Modeling and Theory, volume 16. SIAM, 2014.
[17] A. Shapiro. Monte Carlo sampling methods. Handbooks in Operations Research and Management Science, 10:353–425, 2003.
[18] S. M. Kakade, K. Sridharan, and A. Tewari. On the complexity of linear prediction: Risk bounds, margin bounds, and regularization. In Advances in Neural Information Processing Systems, pages 793–800, 2009.
[19] H. Peng, F. Long, and C. Ding. Feature selection based on mutual information criteria of max-dependency, max-relevance, and min-redundancy. IEEE Transactions on Pattern Analysis and Machine Intelligence, 27(8):1226–1238, 2005.
[20] R. Battiti. Using mutual information for selecting features in supervised neural net learning. IEEE Transactions on Neural Networks, 5(4):537–550, 1994.
[21] S. Boyd, N. Parikh, E. Chu, B. Peleato, and J. Eckstein. Distributed optimization and statistical learning via the alternating direction method of multipliers. Foundations and Trends in Machine Learning, 3(1):1–122, 2011.
Preconditioned Spectral Descent for Deep Learning

David E. Carlson,1 Edo Collins,2 Ya-Ping Hsieh,2 Lawrence Carin,3 Volkan Cevher2
1 Department of Statistics, Columbia University
2 Laboratory for Information and Inference Systems (LIONS), EPFL
3 Department of Electrical and Computer Engineering, Duke University

Abstract

Deep learning presents notorious computational challenges. These challenges include, but are not limited to, the non-convexity of learning objectives and estimating the quantities needed for optimization algorithms, such as gradients. While we do not address the non-convexity, we present an optimization solution that exploits the so-far-unused "geometry" in the objective function in order to best make use of the estimated gradients. Previous work attempted similar goals with preconditioned methods in the Euclidean space, such as L-BFGS, RMSprop, and ADAgrad. In stark contrast, our approach combines a non-Euclidean gradient method with preconditioning. We provide evidence that this combination more accurately captures the geometry of the objective function compared to prior work. We theoretically formalize our arguments and derive novel preconditioned non-Euclidean algorithms. The results are promising in both computational time and quality when applied to Restricted Boltzmann Machines, Feedforward Neural Nets, and Convolutional Neural Nets.

1 Introduction

In spite of the many great successes of deep learning, efficient optimization of deep networks remains a challenging open problem due to the complexity of the model calculations, the non-convex nature of the implied objective functions, and their inhomogeneous curvature [6]. It is established both theoretically and empirically that finding a local optimum in many tasks often gives performance comparable to the global optimum [4], so the primary goal is to find a local optimum quickly.
It is speculated that an increase in computational power and training efficiency will drive performance of deep networks further by utilizing more complicated networks and additional data [14]. Stochastic Gradient Descent (SGD) is the most widespread algorithm of choice for practitioners of machine learning. However, the objective functions typically found in deep learning problems, such as feed-forward neural networks and Restricted Boltzmann Machines (RBMs), have inhomogeneous curvature, rendering SGD ineffective. A common technique for improving efficiency is to use adaptive step-size methods for SGD [25], where each layer in a deep model has an independent step-size. Quasi-Newton methods have shown promising results in networks with sparse penalties [16], and factorized second order approximations have also shown improved performance [18]. A popular alternative to these methods is to use an element-wise adaptive learning rate, which has shown improved performance in ADAgrad [7], ADAdelta [30], and RMSprop [5]. The foundation of all of the above methods lies in the hope that the objective function can be wellapproximated by Euclidean (e.g., Frobenius or ℓ2) norms. However, recent work demonstrated that the matrix of connection weights in an RBM has a tighter majorization bound on the objective function with respect to the Schatten-∞norm compared to the Frobenius norm [1]. A majorizationminimization approach with the non-Euclidean majorization bound leads to an algorithm denoted as Stochastic Spectral Descent (SSD), which sped up the learning of RBMs and other probabilistic 1 models. However, this approach does not directly generalize to other deep models, as it can suffer from loose majorization bounds. In this paper, we combine recent non-Euclidean gradient methods with element-wise adaptive learning rates, and show their applicability to a variety of models. 
Specifically, our contributions are: i) We demonstrate that the objective function in feedforward neural nets is naturally bounded by the Schatten-∞norm. This motivates the application of the SSD algorithm developed in [1], which explicitly treats the matrix parameters with matrix norms as opposed to vector norms. ii) We develop a natural generalization of adaptive methods (ADAgrad, RMSprop) to the nonEuclidean gradient setting that combines adaptive step-size methods with non-Euclidean gradient methods. These algorithms have robust tuning parameters and greatly improve the convergence and the solution quality of SSD algorithm via local adaptation. We denote these new algorithms as RMSspectral and ADAspectral to mark the relationships to Stochastic Spectral Descent and RMSprop and ADAgrad. iii) We develop a fast approximation to our algorithm iterates based on the randomized SVD algorithm [9]. This greatly reduces the per-iteration overhead when using the Schatten-∞norm. iv) We empirically validate these ideas by applying them to RBMs, deep belief nets, feedforward neural nets, and convolutional neural nets. We demonstrate major speedups on all models, and demonstrate improved fit for the RBM and the deep belief net. We denote vectors as bold lower-case letters, and matrices as bold upper-case letters. Operations ⊙ and ⊘denote element-wise multiplication and division, and √ X the element-wise square root of X. 1 denotes the matrix with all 1 entries. ||x||p denotes the standard ℓp norm of x. ||X||Sp denotes the Schatten-p norm of X, which is ||s||p with s the singular values of X. ||X||S∞is the largest singular value of X, which is also known as the matrix 2-norm or the spectral norm. 2 Preconditioned Non-Euclidean Algorithms We first review non-Euclidean gradient descent algorithms in Section 2.1. Section 2.2 motivates and discusses preconditioned non-Euclidean gradient descent. 
Dynamic preconditioners are discussed in Section 2.3, and fast approximations are discussed in Section 2.4.

2.1 Non-Euclidean Gradient Descent

Unless otherwise mentioned, proofs for this section may be found in [13]. Consider the minimization of a closed proper convex function $F(\mathbf{x})$ with Lipschitz gradient $\|\nabla F(\mathbf{x}) - \nabla F(\mathbf{y})\|_q \leq L_p \|\mathbf{x} - \mathbf{y}\|_p,\ \forall \mathbf{x}, \mathbf{y}$, where $p$ and $q$ are dual to each other, and $L_p > 0$ is the smoothness constant. This Lipschitz gradient implies the following majorization bound, which is useful in optimization:
$$F(\mathbf{y}) \leq F(\mathbf{x}) + \langle \nabla F(\mathbf{x}), \mathbf{y} - \mathbf{x} \rangle + \frac{L_p}{2} \|\mathbf{y} - \mathbf{x}\|_p^2. \qquad (1)$$
A natural strategy to minimize $F(\mathbf{x})$ is to iteratively minimize the right-hand side of (1). Defining the #-operator as $\mathbf{s}^\# \in \arg\max_{\mathbf{x}} \left( \langle \mathbf{s}, \mathbf{x} \rangle - \frac{1}{2}\|\mathbf{x}\|_p^2 \right)$, this approach yields the algorithm:
$$\mathbf{x}_{k+1} = \mathbf{x}_k - \frac{1}{L_p}\left[\nabla F(\mathbf{x}_k)\right]^\#, \quad \text{where } k \text{ is the iteration count.} \qquad (2)$$
For $p = q = 2$, (2) is simply gradient descent, and $\mathbf{s}^\# = \mathbf{s}$. In general, (2) can be viewed as gradient descent in a non-Euclidean norm. To explore which norm $\|\mathbf{x}\|_p$ leads to the fastest convergence, we note the convergence rate of (2) is $F(\mathbf{x}_k) - F(\mathbf{x}^*) = O\left(L_p \|\mathbf{x}_0 - \mathbf{x}^*\|_p^2 / k\right)$, where $\mathbf{x}^*$ is a minimizer of $F(\cdot)$. If we have an $L_p$ such that (1) holds and $L_p \|\mathbf{x}_0 - \mathbf{x}^*\|_p^2 \ll L_2 \|\mathbf{x}_0 - \mathbf{x}^*\|_2^2$, then (2) can lead to superior convergence. One such example is presented in [13], where the authors proved that $L_\infty \|\mathbf{x}_0 - \mathbf{x}^*\|_\infty^2$ improves by a dimension-dependent factor over gradient descent for a class of problems in computer science. Moreover, they showed that the algorithm in (2) demands very little computational overhead for their problems, and hence $\|\cdot\|_\infty$ is favored over $\|\cdot\|_2$.

Figure 1: Updates from parameters $\mathbf{W}_k$ for a multivariate logistic regression. (Left) First-order approximation error at parameter $\mathbf{W}_k + s_1 \mathbf{u}_1 \mathbf{v}_1 + s_2 \mathbf{u}_2 \mathbf{v}_2$, with $\{\mathbf{u}_1, \mathbf{u}_2, \mathbf{v}_1, \mathbf{v}_2\}$ singular vectors of $\nabla_{\mathbf{W}} f(\mathbf{W})$.
(Middle) 1st order approximation error at parameter Wk + s1 ˜u1˜v1 + s2 ˜u2˜v2, with {˜u1, ˜u2, ˜v1, ˜v2} singular vectors of √ D ⊙∇Wf(W) with D a preconditioner matrix. (Right) Shape of the error implied by Frobenius norm and the Schatten-∞norm. After preconditioning, the error surface matches the shape implied by the Schatten-∞norm and not the Frobenius norm. As noted in [1], for the log-sum-exp function, lse(α) = log PN i=1 ωi exp(αi), the constant L2 is ≤1/2 and Ω(1/ log(N)) whereas the constant L∞is ≤1. If α are (possibly dependent) N zero mean sub-Gaussian random variables, the convergence for the log-sum-exp objective function is improved by at least N log2 N (see Supplemental Section A.1 for details). As well, non-Euclidean gradient descent can be adapted to the stochastic setting [2]. The log-sum-exp function reoccurs frequently in the cost function of deep learning models. Analyzing the majorization bounds that are dependent on the log-sum-exp function with respect to the model parameters in deep learning reveals majorization functions dependent on the Schatten-∞ norm. This was shown previously for the RBM in [1], and we show a general approach in Supplemental Section A.2 and specific results for feed-forward neural nets in Section 3.2. Hence, we propose to optimize these deep networks with the Schatten-∞norm. 2.2 Preconditioned Non-Euclidean Gradient Descent It has been established that the loss functions of neural networks exhibit pathological curvature [19]: the loss function is essentially flat in some directions, while it is highly curved in others. The regions of high curvature dominate the step-size in gradient descent. A solution to the above problem is to rescale the parameters so that the loss function has similar curvature along all directions. The basis of recent adative methods (ADAgrad, RMSprop) is in preconditioned gradient descent, with iterates xk+1 = xk −ϵkDk −1∇F(xk). 
(3) We restrict without loss of generality the preconditioner Dk to a positive definite diagonal matrix and ϵk > 0 is a chosen step-size. Letting ⟨y, x⟩D ≜⟨y, Dx⟩and ||x||2 D ≜⟨x, x⟩D, we note that the iteration in 3 corresponds to the minimizer of ˜F(y) ≜F(xk) + ⟨∇F(xk), y −xk⟩+ 1 2ϵk ||y −xk||2 Dk. (4) Consequently, for (3) to perform well, ˜F(y) has to either be a good approximation or a tight upper bound of F(y), the true function value. This is equivalent to saying that the first order approximation error F(y)−F(xk)−⟨∇F(xk), y−xk⟩is better approximated by the scaled Euclidean norm. The preconditioner Dk controls the scaling, and the choice of Dk depends on the objective function. As we are motivated to use Schatten-∞norms for our models, the above reasoning leads us to consider a variable metric non-Euclidean approximation. For a matrix X, let us denote D to be an element-wise preconditioner. Note that D is not a diagonal matrix in this case. Because the operations here are element-wise, this would correspond to the case above with a vectorized form of X and a preconditioner of diag(vec(D)). Let ||X||D,S∞= || √ D ⊙X||S∞. We consider the following surrogate of F, F(Y) ≃F(Xk) + ⟨∇F(Xk), Y −Xk⟩+ 1 2ϵk ||Y −Xk||2 Dk,S∞. (5) 3 Using the #-operator from Section 2.1, the minimizer of (5) takes the form (see Supplementary Section C for the proof): Xk+1 = Xk −ϵk[∇F(xk) ⊘ p Dk]# ⊘ p Dk. (6) We note that classification with a softmax link naturally operates on the Schatten-∞norm. As an illustrative example of the applicability of this norm, we show the first order approximation error for the objective function in this model, where the distribution on the class y depends on covariates x, y ∼categorical(softmax(Wx)). Figure 1 (left) shows the error surfaces on W without the preconditioner, where the uneven curvature will lead to poor updates. 
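Update (6) relies on the #-operator, which has a closed form for both norms used in this paper: for the $\ell_\infty$ norm, $\mathbf{x}^\flat = \|\mathbf{x}\|_1 \,\mathrm{sign}(\mathbf{x})$, and for the Schatten-$\infty$ norm, $\mathbf{X}^\# = \|\mathbf{s}\|_1 \mathbf{U}\mathbf{V}^T$ for $\mathbf{X} = \mathbf{U}\,\mathrm{diag}(\mathbf{s})\,\mathbf{V}^T$ (Section 2.4). A minimal numpy sketch, not the authors' code:

```python
import numpy as np

def flat_linf(x):
    """l_inf #-operator ("flat"): argmax_u <x, u> - 0.5 ||u||_inf^2
    is ||x||_1 * sign(x)."""
    return np.abs(x).sum() * np.sign(x)

def sharp_spectral(X):
    """Schatten-inf #-operator: for X = U diag(s) V^T,
    X# = ||s||_1 * U V^T (all singular values flattened to ||s||_1)."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return s.sum() * (U @ Vt)
```

For example, for the diagonal matrix $\mathrm{diag}(3, 1)$ we get $\mathbf{X}^\# = 4 I$, and $\langle \mathbf{X}, \mathbf{X}^\# \rangle = \|\mathbf{s}\|_1^2 = 16$, the value that maximizes $\langle \mathbf{X}, \mathbf{Y} \rangle - \frac{1}{2}\|\mathbf{Y}\|_{S_\infty}^2$ up to the constant $\frac{1}{2}\|\mathbf{s}\|_1^2$.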
The Jacobi (diagonal of the Hessian) preconditioned error surface is shown in Figure 1 (middle), where the curvature has been made homogeneous. However, the shape of the error does not follow the Euclidean (Frobenius) norm, but instead the geometry from the Schatten-∞norm shown in Figure 1 (right). Since many deep networks use the softmax and log-sum-exp to define a probability distribution over possible classes, adapting to the the inherent geometry of this function can benefit learning in deeper layers. 2.3 Dynamic Learning of the Preconditioner Our algorithms amount to choosing an ϵk and preconditioner Dk. We propose to use the preconditioner from ADAgrad [7] and RMSprop [5]. These preconditioners are given below: Dk+1 = λ1 + p Vk+1, Vk+1 = αVk + (1 −α) (∇f(Xk)) ⊙(∇f(Xk)), RMSprop Vk+1 = Vk + (∇f(Xk)) ⊙(∇f(Xk)), ADAgrad . The λ term is a tuning parameter controlling the extremes of the curvature in the preconditioner. The updates in ADAgrad have provably improved regret bound guarantees for convex problems over gradient descent with the iterates in (3) [7]. ADAgrad and ADAdelta [30] have been applied successfully to neural nets. The updates in RMSprop were shown in [5] to approximate the equilibration preconditioner, and have also been successfully applied in autoencoders and supervised neural nets. Both methods require a tuning parameter λ, and RMSprop also requires a term α that controls historical smoothing. We propose two novel algorithms that both use the iterate in (6). The first uses the ADAgrad preconditioner which we call ADAspectral. The second uses the RMSprop preconditioner which we call RMSspectral. 2.4 The #-Operator and Fast Approximations Letting X = Udiag(s)VT be the SVD of X, the #-operator for the Schatten-∞norm (also known as the spectral norm) can be computed as follows [1]: X# = ||s||1UVT . Depending on the cost of the gradient estimation, this computation may be relatively cheap [1] or quite expensive. 
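Combining the RMSprop-style accumulator above with the preconditioned #-update (6) gives one RMSspectral step on a matrix parameter. A hedged sketch; hyperparameter values are illustrative, and swapping the accumulator update for `V + grad**2` yields ADAspectral:

```python
import numpy as np

def sharp(X):
    """Schatten-inf #-operator: X# = ||s||_1 U V^T for X = U diag(s) V^T."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return s.sum() * (U @ Vt)

def rmsspectral_step(W, grad, V, eps=1e-3, lam=1e-2, alpha=0.9):
    """One RMSspectral update:
       V_{k+1} = alpha V_k + (1 - alpha) grad**2    (RMSprop accumulator)
       D       = lam + sqrt(V_{k+1})                (element-wise preconditioner)
       W_{k+1} = W - eps * sharp(grad / sqrt(D)) / sqrt(D)   (update (6))"""
    V = alpha * V + (1 - alpha) * grad ** 2
    sqrtD = np.sqrt(lam + np.sqrt(V))
    W = W - eps * sharp(grad / sqrtD) / sqrtD
    return W, V
```

A quick sanity check: on the quadratic $F(\mathbf{W}) = \frac{1}{2}\|\mathbf{W} - \mathbf{A}\|_F^2$ (gradient $\mathbf{W} - \mathbf{A}$), a small step is a descent step, since $\langle \mathbf{G}, [\mathbf{G} \oslash \sqrt{\mathbf{D}}]^\# \oslash \sqrt{\mathbf{D}} \rangle = \|\tilde{\mathbf{s}}\|_1^2 > 0$.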
In situations where the gradient estimate is relatively cheap, the exact #-operator demands significant overhead. Instead of calculating the full SVD, we utilize a randomize SVD algorithm [9, 22]. For N ≤M, this reduces the cost from O(MN 2) to O(MK2+MN log(k)) with k the number of projections used in the algorithm. Letting ˜Udiag(˜s) ˜VT ≃X represent the rank-k+ 1 approximate SVD, then the approximate #-operator corresponds to the low-rank approximation and the reweighted remainder, X# ≃||˜s||1( ˜U1:k ˜V1:k + ˜s−1 k+1(X −˜U1:Kdiag( ˜ s1:K) ˜ V1:K T )). We note that the #-operator is also defined for the ℓ∞norm, however, for notational clarity, we will denote this as x♭and leave the # notation for the Schatten-∞case. This x♭solution was given in [13, 1] as x♭= ||x||1×sign(x). Pseudocode for these operations is in the Supplementary Materials. 3 Applicability of Schatten-∞Bounds to Models 3.1 Restricted Boltzmann Machines (RBM) RBMs [26, 11] are bipartite Markov Random Field models that form probabilistic generative models over a collection of data. They are useful both as generative models and for “pre-training” deep networks [11, 8]. In the binary case, the observations are binary v ∈{0, 1}M with connections to latent (hidden) binary units, h ∈{0, 1}J. The probability for each state {v, h} is defined 4 by parameters θ = {W, c, b} with the energy −Eθ(v, h) ≜cT v + vT Wh + hT b and probability pθ(v, h) ∝−Eθ(v, h). The maximum likelihood estimator implies the objective function minθ F(θ) = −1 N log P h exp(−Eθ(vn, h)) + log P v P h exp(−Eθ(vn, h)). Algorithm 1 RMSspectral for RBMs Inputs: ϵ1,..., λ, α, Nb Parameters: θ = {W, b, c} History Terms : VW, vb, vc for i=1,... 
do Sample a minibatch of size Nb Estimate gradient (dW, db, dc) % Update matrix parameter VW = αVW + (1 −α)dW ⊙dW D1/2 W = p λ + √VW W = W −ϵi(dW ⊘D1/2 W )# ⊘D1/2 W % Update bias term b Vb = αVb + (1 −α)db ⊙db d1/2 b = p λ + √vb b = b −ϵi(db ⊘d1/2 b )♭⊘d1/2 b % Same for c end for This objective function is generally intractable, although an accurate but computationally intensive estimator is given via Annealed Importance Sampling (AIS) [21, 24]. The gradient can be comparatively quickly estimated by taking a small number of Gibbs sampling steps in a Monte Carlo Integration scheme (Contrastive Divergence) [12, 28]. Due to the noisy nature of the gradient estimation and the intractable objective function, second order methods and line search methods are inappropriate and SGD has traditionally been used [16]. [1] proposed an upper bound on perturbations to W of F({W + U, b, c}) ≤F({W, b, c}) + ⟨∇WF({W, b, c}), U⟩+ MJ 2 ||U||2 S∞ This majorization motivated the Stochastic Spectral Descent (SSD) algorithm, which uses the #operator in Section 2.4. In addition, bias parameters b and c were bound on the ℓ∞norm and use the ♭updates from Section 2.4 [1]. In their experiments, this method showed significantly improved performance over competing algorithm for mini-batches of 2J and CD-25 (number of Gibbs sweeps), where the computational cost of the #-operator is insignificant. This motivates using the preconditioned spectral descent methods, and we show our proposed RMSspectral method in Algorithm 1. When the RBM is used to “pre-train” deep models, CD-1 is typically used (1 Gibbs sweep). One such model is the Deep Belief Net, where parameters are effectively learned by repeatedly learning RBM models [11, 24]. In this case, the SVD operation adds significant overhead. Therefore, the fast approximation of Section 2.4 and the adaptive methods result in vast improvements. These enhancements naturally extend to the Deep Belief Net, and results are detailed in Section 4.1. 
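The fast approximation of Section 2.4 referenced here can be sketched with a Gaussian range finder in the style of [9]; the oversampling amount and fixed seed are illustrative assumptions, and this is not the authors' code:

```python
import numpy as np

def randomized_svd(X, k, oversample=5, rng=None):
    """Rank-k randomized SVD via a Gaussian range finder."""
    rng = np.random.default_rng(0) if rng is None else rng
    G = rng.standard_normal((X.shape[1], k + oversample))
    Q, _ = np.linalg.qr(X @ G)           # orthonormal basis for range(X G)
    Ub, s, Vt = np.linalg.svd(Q.T @ X, full_matrices=False)
    return (Q @ Ub)[:, :k], s[:k], Vt[:k]

def approx_sharp(X, k, rng=None):
    """Approximate Schatten-inf #-operator from Section 2.4:
    X# ~ ||s~||_1 ( U_{1:k} V_{1:k}^T
                    + s~_{k+1}^{-1} (X - U_{1:k} diag(s~_{1:k}) V_{1:k}^T) ),
    using a rank-(k+1) randomized SVD (s~ collects the k+1 computed values)."""
    U, s, Vt = randomized_svd(X, k + 1, rng=rng)
    low = U[:, :k] @ Vt[:k]
    resid = X - U[:, :k] @ (s[:k, None] * Vt[:k])
    return s.sum() * (low + resid / s[k])
```

This replaces the $O(MN^2)$ exact SVD with the cheaper randomized factorization, which is what makes the CD-1 and Deep Belief Net settings above practical.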
3.2 Supervised Feedforward Neural Nets Algorithm 2 RMSspectral for FNN Inputs: ϵ1,..., λ, α, Nb Parameters: θ = {W0, . . . , WL} History Terms : V0, . . . , VL for i=1,... do Sample a minibatch of size Nb Estimate gradient by backprop (dWℓ) for ℓ= 0, . . . , L do Vℓ= αVℓ+ (1 −α)dWℓ⊙dWℓ D 1 2 ℓ= p λ + √Vℓ Wℓ= Wℓ−ϵi(dWℓ⊘D 1 2 ℓ)#⊘D 1 2 ℓ end for end for Feedforward Neural Nets are widely used models for classification problems. We consider L layers of hidden variables with deterministic nonlinear link functions with a softmax classifier at the final layer. Ignoring bias terms for clarity, an input x is mapped through a linear transformation and a nonlinear link function η(·) to give the first layer of hidden nodes, α1 = η(W0x). This process continues with αℓ= η(Wℓ−1αℓ−1). At the last layer, we set h = WLαL and an J-dimensional class vector is drawn y ∼categorical(softmax(h)). The standard approach for parameter learning is to minimize the objective function that corresponds to the (penalized) maximum likelihood objective function over the parameters θ = {W0, . . . , WL} and data examples {x1, . . . , xN}, which is given by: θML = arg minθ f(θ) = 1 N PN n=1 −yT n hn,θ + log PJ j=1 exp(hn,θ,j) (7) While there have been numerous recent papers detailing different optimization approaches to this objective [7, 6, 5, 16, 19], we are unaware of any approaches that attempt to derive non-Euclidean bounds. As a result, we explore the properties of this objective function. We show the key results here and provide further details on the general framework in Supplemental Section A.2 and the specific derivation in Supplemental Section D. 
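The objective (7) can be evaluated with the usual log-sum-exp stabilization; a minimal sketch of the forward pass and loss, where the tanh link and the layer shapes are illustrative assumptions:

```python
import numpy as np

def forward(Ws, x):
    """alpha_{l+1} = eta(W_l alpha_l) with eta = tanh (one possible link);
    the final layer is linear: h = W_L alpha_L."""
    a = x
    for W in Ws[:-1]:
        a = np.tanh(W @ a)
    return Ws[-1] @ a

def nll(Ws, X, Y):
    """Objective (7): (1/N) sum_n [ -y_n^T h_n + log sum_j exp(h_{n,j}) ],
    with Y a list of one-hot class vectors."""
    total = 0.0
    for x, y in zip(X, Y):
        h = forward(Ws, x)
        m = h.max()                           # log-sum-exp trick for stability
        lse = m + np.log(np.exp(h - m).sum())
        total += -y @ h + lse
    return total / len(X)
```

With all-zero weights the logits are uniform, so the loss is $\log J$ for $J$ classes, a handy initialization check.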
Figure 2: A normalized time unit is 1 SGD iteration. (Left) Reconstruction error from training the MNIST dataset using CD-1. (Middle) Log-likelihood of training MNIST using Persistent CD-25. (Right) Log-likelihood of training Caltech-101 Silhouettes using Persistent CD-25.

By using properties of the log-sum-exp function from [1, 2], the objective function from (7) has an upper bound,
f(φ) ≤ f(θ) + ⟨∇_θ f(θ), φ − θ⟩ + (1/N) Σ_{n=1}^N [ (1/2) max_j (h_{n,φ,j} − h_{n,θ,j})² + 2 max_j |h_{n,φ,j} − h_{n,θ,j} − ⟨∇_θ h_{n,θ,j}, φ − θ⟩| ]. (8)
We note that this implicitly requires the link function to have a Lipschitz continuous gradient. Many commonly used links, including the logistic, hyperbolic tangent, and smoothed rectified linear units, have Lipschitz continuous gradients, but rectified linear units do not; in this case, we simply proceed with the subgradient. A strict upper bound on these parameters is highly pessimistic, so instead we propose to take a local approximation around the parameter W_ℓ in each layer individually. Considering a perturbation U around W_ℓ, the terms in (8) have the following approximate upper bounds:
(h_{φ,j} − h_{θ,j})² ⪅ ∥U∥²_{S∞} ∥α_ℓ∥²₂ ∥∇_{α_{ℓ+1}} h_j∥²₂ max_x η′(x)²,
|h_{φ,j} − h_{θ,j} − ⟨∇_θ h_{θ,j}, φ − θ⟩| ⪅ (1/2) ∥U∥²_{S∞} ∥α_ℓ∥²₂ ∥∇_{α_{ℓ+1}} h_j∥_∞ ∥∇_{α_ℓ} h_j∥_∞ max_x |η″(x)|,
where η′(x) = (d/dt) η(t)|_{t=x} and η″(x) = (d²/dt²) η(t)|_{t=x}. Because both α_ℓ and ∇_{α_{ℓ+1}} h_j are easily calculated during the standard backpropagation procedure for gradient estimation, these bounds can be computed without significant overhead.
Since these bounds are in terms of the Schatten-∞ norm, this motivates using the Stochastic Spectral Descent algorithm with the #-operator applied to the weight matrix of each layer individually. However, the proposed updates require the calculation of many additional terms; moreover, they are pessimistic and do not account for the inhomogeneous curvature. Instead of attempting to derive the step sizes, both RMSspectral and ADAspectral learn appropriate element-wise step sizes from the gradient history. The preconditioned #-operator is then applied to the weights of each layer individually. The RMSspectral method for feedforward neural nets is shown in Algorithm 2. It is unclear how to use non-Euclidean geometry for convolutional layers [14], as the pooling and convolution create alternative geometries. However, the ADAspectral and RMSspectral algorithms can be applied to convolutional neural nets by using the non-Euclidean steps on the dense layers and the linear updates from ADAgrad and RMSprop on the convolutional filters. The benefits from the dense layers then propagate down to the convolutional layers.

4 Experiments

4.1 Restricted Boltzmann Machines

To show the use of the approximate #-operator from Section 2.4 as well as RMSspectral and ADAspectral, we first perform experiments on the MNIST dataset. The dataset was binarized as in [24]. We detail the algorithmic settings used in these experiments in Supplemental Table 1; they were chosen to match previous literature on the topic. The batch size was chosen to be 1000 data points, matching [1]. This is larger than is typical in the RBM literature [24, 10], but we found that all algorithms improved their final results with larger batch sizes due to the reduction in sampling noise. The analysis supporting the SSD algorithm does not directly apply to the CD-1 learning procedure, so it is of interest to examine how well it generalizes to this framework.
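The "fast approximation" of the #-operator referred to above is, in spirit, a randomized low-rank SVD (as in Halko et al. [9]). A hedged sketch of such an approximate #-operator follows; the rank `k`, the function name `approx_sharp`, and the plain range finder (no power iterations) are assumptions for illustration, not the authors' exact procedure.

```python
import numpy as np

def approx_sharp(G, k=8, seed=0):
    """Approximate G# = ||G||_{S1} * U V^T using a randomized rank-k range finder:
    project onto a random sketch, take the SVD of the small projected matrix."""
    rng = np.random.default_rng(seed)
    Q, _ = np.linalg.qr(G @ rng.standard_normal((G.shape[1], k)))  # orthonormal range basis
    U_small, s, Vt = np.linalg.svd(Q.T @ G, full_matrices=False)   # cheap SVD of the sketch
    return s.sum() * (Q @ U_small) @ Vt
```

When k reaches the full rank of G this reproduces the exact SVD-based operator; smaller k trades accuracy for speed, which matters for the CD-1/DBN setting where the full SVD dominates the iteration cost.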
To examine the effect of CD-1 learning, we used reconstruction error with J = 500 hidden latent variables. Reconstruction error is a standard heuristic for analyzing convergence [10], defined as ∥v − v̂∥₂, where v is an observation and v̂ is the mean value after a CD-1 pass from that sample. This result is shown in Figure 2 (left), with all algorithms normalized to the amount of time it takes to perform a single SGD iteration. The full #-operator in the SSD algorithm adds significant overhead to each iteration, so SSD does not provide competitive performance in this situation. The SSD-F, ADAspectral, and RMSspectral algorithms use the approximate #-operator. Combining the adaptive nature of RMSprop with non-Euclidean optimization provides dramatically improved performance, seemingly converging faster and to a better optimum. High CD orders are necessary to fit the ML estimator of an RBM [24]. To this end, we use the Persistent CD method of [28] with 25 Gibbs sweeps per iteration. We show the log-likelihood of the training data as a function of time in Figure 2 (middle). The log-likelihood is estimated using AIS with the parameters and code from [24]. There is a clear divide, with improved performance from the Schatten-∞-based methods, and further improvement from including the preconditioners. As well as showing improved training, the test set has an improved log-likelihood of -85.94 for RMSspectral and -86.04 for SSD. For further exploration, we trained a Deep Belief Net with two hidden layers of size 500-2000 to match [24]. We trained the first hidden layer with CD-1 and RMSspectral, and the second layer with PCD-25 and RMSspectral. We used the same model sizes, tuning parameters, and evaluation parameters and code as [24], so the only change is due to the optimization methods. Our estimated lower bound on the performance of this model is -80.96 on the test set.
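As a reference point, the reconstruction-error heuristic above is cheap to compute. Below is a sketch for a Bernoulli RBM; the weight/bias conventions (W, hidden bias b, visible bias c) and the single sampled-hidden pass are our assumptions, since the paper does not spell out its exact implementation.

```python
import numpy as np

def reconstruction_error(v, W, b, c, rng):
    """||v - v_hat||_2 after one CD-1 pass: sample the hidden units given v,
    then take the mean-field reconstruction of the visibles."""
    sig = lambda x: 1.0 / (1.0 + np.exp(-x))
    h = (rng.random(b.shape) < sig(W @ v + b)).astype(float)  # sampled hidden units
    v_hat = sig(W.T @ h + c)                                  # mean reconstruction
    return float(np.linalg.norm(v - v_hat))
```

As the text notes, this is a convergence heuristic rather than a likelihood estimate; AIS is still needed for the log-likelihood numbers.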
This compares to -86.22 from [24] and -84.62 for a Deep Boltzmann Machine from [23]; however, we caution that these numbers no longer reflect true performance on the test set due to bias from AIS and repeated overfitting [23]. Nevertheless, this is a fair comparison because we use the same settings and evaluation code. For further evidence, we performed the same maximum-likelihood experiment on the Caltech-101 Silhouettes dataset [17]. This dataset was previously used to demonstrate the effectiveness of an adaptive gradient step size and the Enhanced Gradient method for Restricted Boltzmann Machines [3]. The training curves for the log-likelihood are shown in Figure 2 (right). Here, the methods based on the Schatten-∞ norm give state-of-the-art results in under 1000 iterations and thoroughly dominate the learning. Furthermore, both ADAspectral and RMSspectral saturate to a higher value on the training set and give improved testing performance. On the test set, the best result from the non-Euclidean methods is a testing log-likelihood of -106.18 for RMSspectral, versus -109.01 for RMSprop. These values all improve over the best reported value from SGD of -114.75 [3].

4.2 Standard and Convolutional Neural Networks

Compared to RBMs and other popular machine learning models, standard feedforward neural nets are cheap to train and evaluate. The following experiments show that even in this case, where the computation of the gradient is efficient, our proposed algorithms produce a major speedup in convergence, in spite of the per-iteration cost of approximating the SVD of the gradient. We demonstrate this claim using the well-known MNIST and CIFAR-10 [15] image datasets. Both datasets pose a classification task over 10 possible classes.
However, CIFAR-10, consisting of 50K RGB images of vehicles and animals, with an additional 10K images reserved for testing, poses a considerably more difficult problem than MNIST, with its 60K greyscale images of hand-written digits plus 10K test samples. This is indicated by the state-of-the-art accuracy on the MNIST test set reaching 99.79% [29], while the same architecture achieves "only" 90.59% accuracy on CIFAR-10. To obtain state-of-the-art performance on these datasets, it is necessary to use various types of data pre-processing, regularization schemes, and data augmentation, all of which have a big impact on model generalization [14]. In our experiments we only employ ZCA whitening on the CIFAR-10 data [15], since these methods are not the focus of this paper. Instead, we focus on the comparative performance of the various algorithms on a variety of models. We trained neural networks with zero, one, and two hidden layers, with various hidden layer sizes, and with both logistic and rectified linear unit (ReLU) non-linearities [20]. Algorithm parameters can be found in Supplemental Table 2.

Figure 3: (Left) Log-likelihood of the current training batch on the MNIST dataset. (Middle) Log-likelihood of the current training batch on CIFAR-10. (Right) Accuracy on the CIFAR-10 test set.

We observed fairly consistent performance across the various configurations, with spectral methods yielding greatly improved performance over their Euclidean counterparts. Figure 3 shows convergence curves in terms of log-likelihood on the training data as learning proceeds.
For both MNIST and CIFAR-10, SSD with estimated Lipschitz steps outperforms SGD. Also clearly visible is the large benefit of using local preconditioning to fit the local geometry of the objective, amplified by the spectral methods. Spectral methods also improve the convergence of convolutional neural nets (CNNs). In this setting, we apply the #-operator only to the fully connected linear layers. Preconditioning is performed for all layers; i.e., when using RMSspectral for the linear layers, the convolutional layers are updated via RMSprop. We applied our algorithms to CNNs with one, two, and three convolutional layers, followed by two fully connected layers. Each convolutional layer was followed by max pooling and a ReLU non-linearity. We used 5 × 5 filters, ranging from 32 to 64 filters per layer. We evaluated the MNIST test set using a two-layer convolutional net with 64 kernels. The best generalization performance on the test set after 100 epochs was achieved by both RMSprop and RMSspectral, with an accuracy of 99.15%. RMSspectral reached this level of accuracy after only 40 epochs, less than half of what RMSprop required. To further demonstrate the speedup, we trained on CIFAR-10 using a deeper net with three convolutional layers, following the architecture used in [29]. In Figure 3 (right), the test set accuracy is shown as training proceeds with both RMSprop and RMSspectral. While they eventually achieve similar accuracy rates, RMSspectral reaches that rate four times faster.

5 Discussion

In this paper we have demonstrated that many deep models naturally operate with non-Euclidean geometry, and that exploiting this gives remarkable improvements in training efficiency, as well as finding improved local optima. In addition, by using adaptive methods, the algorithms can use the same tuning parameters across different model sizes and configurations. We find that in the RBM and DBN, improving the optimization can give dramatic performance improvements on both the training and the test set.
For feedforward neural nets, the training efficiency of the proposed methods gives staggering improvements to the training performance. While the training performance is drastically better via the non-Euclidean quasi-Newton methods, the performance on the test set is improved for RBMs and DBNs, but not for feedforward neural networks. However, because our proposed algorithms fit the model significantly faster, they can help improve Bayesian optimization schemes [27] to learn appropriate penalization strategies and model configurations. Furthermore, these methods can be adapted to dropout [14] and other recently proposed regularization schemes to help achieve state-of-the-art performance.

Acknowledgements

The research reported here was funded in part by ARO, DARPA, DOE, NGA and ONR, and in part by the European Commission under grants MIRG-268398 and ERC Future Proof, by the Swiss Science Foundation under grants SNF 200021-146750, SNF CRSII2-147633, and the NCCR Marvel. We thank the reviewers for their helpful comments.

References

[1] D. Carlson, V. Cevher, and L. Carin. Stochastic Spectral Descent for Restricted Boltzmann Machines. AISTATS, 2015.
[2] D. Carlson, Y.-P. Hsieh, E. Collins, L. Carin, and V. Cevher. Stochastic Spectral Descent for Discrete Graphical Models. IEEE J. Special Topics in Signal Processing, 2016.
[3] K. Cho, T. Raiko, and A. Ilin. Enhanced Gradient for Training Restricted Boltzmann Machines. Neural Computation, 2013.
[4] A. Choromanska, M. Henaff, M. Mathieu, G. B. Arous, and Y. LeCun. The Loss Surfaces of Multilayer Networks. AISTATS, 2015.
[5] Y. N. Dauphin, H. de Vries, J. Chung, and Y. Bengio. RMSProp and Equilibrated Adaptive Learning Rates for Non-Convex Optimization. arXiv:1502.04390, 2015.
[6] Y. N. Dauphin, R. Pascanu, C. Gulcehre, K. Cho, S. Ganguli, and Y. Bengio. Identifying and Attacking the Saddle Point Problem in High-Dimensional Non-Convex Optimization. NIPS, 2014.
[7] J. Duchi, E. Hazan, and Y. Singer. Adaptive Subgradient Methods for Online Learning and Stochastic Optimization. JMLR, 2010.
[8] D. Erhan, Y. Bengio, A. Courville, P.-A. Manzagol, P. Vincent, and S. Bengio. Why Does Unsupervised Pre-training Help Deep Learning? JMLR, 2010.
[9] N. Halko, P. G. Martinsson, and J. A. Tropp. Finding Structure with Randomness: Probabilistic Algorithms for Constructing Approximate Matrix Decompositions. SIAM Review, 2011.
[10] G. Hinton. A Practical Guide to Training Restricted Boltzmann Machines. U. Toronto Technical Report, 2010.
[11] G. E. Hinton, S. Osindero, and Y.-W. Teh. A Fast Learning Algorithm for Deep Belief Nets. Neural Computation, 2006.
[12] G. Hinton. Training Products of Experts by Minimizing Contrastive Divergence. Neural Computation, 2002.
[13] J. A. Kelner, Y. T. Lee, L. Orecchia, and A. Sidford. An Almost-Linear-Time Algorithm for Approximate Max Flow in Undirected Graphs, and its Multicommodity Generalizations. 2013.
[14] A. Krizhevsky and G. E. Hinton. ImageNet Classification with Deep Convolutional Neural Networks. NIPS, 2012.
[15] A. Krizhevsky and G. Hinton. Learning Multiple Layers of Features from Tiny Images. University of Toronto Tech. Rep., 2009.
[16] Q. V. Le, A. Coates, B. Prochnow, and A. Y. Ng. On Optimization Methods for Deep Learning. ICML, 2011.
[17] B. Marlin and K. Swersky. Inductive Principles for Restricted Boltzmann Machine Learning. ICML, 2010.
[18] J. Martens and R. Grosse. Optimizing Neural Networks with Kronecker-factored Approximate Curvature. arXiv:1503.05671, 2015.
[19] J. Martens and I. Sutskever. Parallelizable Sampling of Markov Random Fields. AISTATS, 2010.
[20] V. Nair and G. E. Hinton. Rectified Linear Units Improve Restricted Boltzmann Machines. ICML, 2010.
[21] R. M. Neal. Annealed Importance Sampling. U. Toronto Technical Report, 1998.
[22] V. Rokhlin, A. Szlam, and M. Tygert. A Randomized Algorithm for Principal Component Analysis. SIAM Journal on Matrix Analysis and Applications, 2010.
[23] R. Salakhutdinov and G. Hinton. Deep Boltzmann Machines. AISTATS, 2009.
[24] R. Salakhutdinov and I. Murray. On the Quantitative Analysis of Deep Belief Networks. ICML, 2008.
[25] T. Schaul, S. Zhang, and Y. LeCun. No More Pesky Learning Rates. arXiv:1206.1106, 2012.
[26] P. Smolensky. Information Processing in Dynamical Systems: Foundations of Harmony Theory. 1986.
[27] J. Snoek, H. Larochelle, and R. P. Adams. Practical Bayesian Optimization of Machine Learning Algorithms. NIPS, 2012.
[28] T. Tieleman and G. Hinton. Using Fast Weights to Improve Persistent Contrastive Divergence. ICML, 2009.
[29] L. Wan, M. Zeiler, S. Zhang, Y. L. Cun, and R. Fergus. Regularization of Neural Networks using DropConnect. ICML, 2013.
[30] M. D. Zeiler. ADADELTA: An Adaptive Learning Rate Method. arXiv:1212.5701, 2012.
Accelerated Mirror Descent in Continuous and Discrete Time

Walid Krichene, UC Berkeley (walid@eecs.berkeley.edu); Alexandre M. Bayen, UC Berkeley (bayen@berkeley.edu); Peter L. Bartlett, UC Berkeley and QUT (bartlett@berkeley.edu)

Abstract

We study accelerated mirror descent dynamics in continuous and discrete time. Combining the original continuous-time motivation of mirror descent with a recent ODE interpretation of Nesterov's accelerated method, we propose a family of continuous-time descent dynamics for convex functions with Lipschitz gradients, such that the solution trajectories converge to the optimum at an O(1/t²) rate. We then show that a large family of first-order accelerated methods can be obtained as a discretization of the ODE, and these methods converge at an O(1/k²) rate. This connection between accelerated mirror descent and the ODE provides an intuitive approach to the design and analysis of accelerated first-order algorithms.

1 Introduction

We consider a convex optimization problem, minimize_{x∈X} f(x), where X ⊆ ℝⁿ is convex and closed, f is a C¹ convex function, and ∇f is assumed to be L_f-Lipschitz. Let f⋆ be the minimum of f on X. Many convex optimization methods can be interpreted as the discretization of an ordinary differential equation whose solutions are guaranteed to converge to the set of minimizers. Perhaps the simplest such method is gradient descent, given by the iteration x^{(k+1)} = x^{(k)} − s∇f(x^{(k)}) for some step size s, which can be interpreted as the discretization of the ODE Ẋ(t) = −∇f(X(t)) with discretization step s. The well-established theory of ordinary differential equations can provide guidance in the design and analysis of optimization algorithms, and has been used for unconstrained optimization [8, 7, 13], constrained optimization [27] and stochastic optimization [25]. In particular, proving convergence of the solution trajectories of an ODE can often be achieved using simple and elegant Lyapunov arguments.
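The correspondence between gradient descent and the ODE Ẋ(t) = −∇f(X(t)) is exactly forward-Euler discretization; a minimal sketch:

```python
import numpy as np

def gd_as_euler(grad_f, x0, s, n_steps):
    """Forward Euler on X' = -grad f(X) with step s recovers x_{k+1} = x_k - s grad f(x_k)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(n_steps):
        x = x - s * grad_f(x)
    return x
```

For f(x) = ∥x∥²/2 (so ∇f(x) = x) the iterates contract by a factor (1 − s) per step, mirroring the exponential decay of the continuous-time trajectory.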
The ODE can then be carefully discretized to obtain an optimization algorithm for which the convergence rate can be analyzed using an analogous Lyapunov argument in discrete time. In this article, we focus on two families of first-order methods: Nesterov's accelerated method [22] and Nemirovski's mirror descent method [19]. First-order methods have become increasingly important for large-scale optimization problems that arise in machine learning applications. Nesterov's accelerated method [22] has been applied to many problems and extended in a number of ways; see for example [23, 20, 21, 4]. The mirror descent method also provides an important generalization of the gradient descent method to non-Euclidean geometries, as discussed in [19, 3], and has many applications in convex optimization [6, 5, 12, 15], as well as online learning [9, 11]. An intuitive understanding of these methods is of particular importance for the design and analysis of new algorithms. Although Nesterov's method has been notoriously hard to explain intuitively [14], progress has been made recently: in [28], Su et al. give an ODE interpretation of Nesterov's method. However, this interpretation is restricted to the original method [22] and does not apply to its extensions to non-Euclidean geometries. In [1], Allen-Zhu and Orecchia give another interpretation of Nesterov's method as performing, at each iteration, a convex combination of a mirror step and a gradient step. Although it covers a broader family of algorithms (including non-Euclidean geometries), this interpretation still requires an involved analysis and lacks the simplicity and elegance of ODEs. We provide a new interpretation which has the benefits of both approaches: we show that a broad family of accelerated methods (which includes those studied in [28] and [1]) can be obtained as a discretization of a simple ODE, which converges at an O(1/t²) rate.
This provides a unified interpretation, which could potentially simplify the design and analysis of first-order accelerated methods. The continuous-time interpretation [28] of Nesterov's method and the continuous-time motivation of mirror descent [19] both rely on a Lyapunov argument. They are reviewed in Section 2. By combining these ideas, we propose, in Section 3, a candidate Lyapunov function V(X(t), Z(t), t) that depends on two state variables: X(t), which evolves in the primal space E = ℝⁿ, and Z(t), which evolves in the dual space E*, and we design coupled dynamics of (X, Z) to guarantee that (d/dt)V(X(t), Z(t), t) ≤ 0. Such a function is said to be a Lyapunov function, in reference to [18]; see also [16]. This leads to a new family of ODE systems, given in Equation (5). We prove the existence and uniqueness of the solution to (5) in Theorem 1. Then we prove in Theorem 2, using the Lyapunov function V, that the solution trajectories are such that f(X(t)) − f⋆ = O(1/t²). In Section 4, we give a discretization of these continuous-time dynamics and obtain a family of accelerated mirror descent methods, for which we prove the same O(1/k²) convergence rate (Theorem 3) using a Lyapunov argument analogous to (though more involved than) the continuous-time case. We give, as an example, a new accelerated method on the simplex, which can be viewed as performing, at each step, a convex combination of two entropic projections with different step sizes. This ODE interpretation of accelerated mirror descent gives new insights and allows us to extend recent results such as the adaptive restarting heuristics proposed by O'Donoghue and Candès in [24], which are known to empirically improve the convergence rate. We test these methods on numerical examples in Section 5 and comment on their performance.
2 ODE interpretations of Nemirovski's mirror descent method and Nesterov's accelerated method

Proving convergence of the solution trajectories of an ODE often involves a Lyapunov argument. For example, to prove convergence of the solutions to the gradient descent ODE Ẋ(t) = −∇f(X(t)), consider the Lyapunov function V(X(t)) = (1/2)∥X(t) − x⋆∥² for some minimizer x⋆. Then the time derivative of V(X(t)) is given by
(d/dt)V(X(t)) = ⟨Ẋ(t), X(t) − x⋆⟩ = ⟨−∇f(X(t)), X(t) − x⋆⟩ ≤ −(f(X(t)) − f⋆),
where the last inequality is by convexity of f. Integrating, we have V(X(t)) − V(x₀) ≤ t f⋆ − ∫₀ᵗ f(X(τ)) dτ, thus by Jensen's inequality,
f((1/t) ∫₀ᵗ X(τ) dτ) − f⋆ ≤ (1/t) ∫₀ᵗ f(X(τ)) dτ − f⋆ ≤ V(x₀)/t,
which proves that f((1/t) ∫₀ᵗ X(τ) dτ) converges to f⋆ at an O(1/t) rate.

2.1 Mirror descent ODE

The previous argument was extended by Nemirovski and Yudin in [19] to a family of methods called mirror descent. The idea is to start from a non-negative function V, then to design dynamics for which V is a Lyapunov function. Nemirovski and Yudin argue that one can replace the Lyapunov function V(X(t)) = (1/2)∥X(t) − x⋆∥² by a function on the dual space, V(Z(t)) = D_{ψ*}(Z(t), z⋆), where Z(t) ∈ E* is a dual variable for which we will design the dynamics (z⋆ is the value of Z at equilibrium), and the corresponding trajectory in the primal space is X(t) = ∇ψ*(Z(t)). Here ψ* is a convex function defined on E*, such that ∇ψ* maps E* to X, and D_{ψ*}(Z(t), z⋆) is the Bregman divergence associated with ψ*, defined as D_{ψ*}(z, y) = ψ*(z) − ψ*(y) − ⟨∇ψ*(y), z − y⟩. The function ψ* is said to be ℓ-strongly convex w.r.t. a reference norm ∥·∥* if D_{ψ*}(z, y) ≥ (ℓ/2)∥z − y∥²* for all y, z, and it is said to be L-smooth w.r.t. ∥·∥* if D_{ψ*}(z, y) ≤ (L/2)∥z − y∥²*. For a review of the properties of Bregman divergences, see Chapter 11.2 in [11], or Appendix A in [2].
By definition of the Bregman divergence, we have
(d/dt)V(Z(t)) = (d/dt)D_{ψ*}(Z(t), z⋆) = (d/dt)(ψ*(Z(t)) − ψ*(z⋆) − ⟨∇ψ*(z⋆), Z(t) − z⋆⟩) = ⟨∇ψ*(Z(t)) − ∇ψ*(z⋆), Ż(t)⟩ = ⟨X(t) − x⋆, Ż(t)⟩.
Therefore, if the dual variable Z obeys the dynamics Ż = −∇f(X), then (d/dt)V(Z(t)) = −⟨∇f(X(t)), X(t) − x⋆⟩ ≤ −(f(X(t)) − f⋆), and by the same argument as in the gradient descent ODE, V is a Lyapunov function and f((1/t)∫₀ᵗ X(τ)dτ) − f⋆ converges to 0 at an O(1/t) rate. The mirror descent ODE system can be summarized by
X = ∇ψ*(Z), Ż = −∇f(X), X(0) = x₀, Z(0) = z₀ with ∇ψ*(z₀) = x₀. (1)
Note that since ∇ψ* maps into X, X(t) = ∇ψ*(Z(t)) remains in X. Finally, the unconstrained gradient descent ODE can be obtained as a special case of the mirror descent ODE (1) by taking ψ*(z) = (1/2)∥z∥², for which ∇ψ* is the identity, in which case X and Z coincide.

2.2 ODE interpretation of Nesterov's accelerated method

In [28], Su et al. show that Nesterov's accelerated method [22] can be interpreted as a discretization of a second-order differential equation, given by
Ẍ + ((r+1)/t)Ẋ + ∇f(X) = 0, X(0) = x₀, Ẋ(0) = 0. (2)
The argument uses the following Lyapunov function (up to reparameterization),
E(t) = (t²/r)(f(X) − f⋆) + (r/2)∥X + (t/r)Ẋ − x⋆∥²,
which is proved to be a Lyapunov function for the ODE (2) whenever r ≥ 2. Since E is decreasing along trajectories of the system, it follows that for all t > 0, E(t) ≤ E(0) = (r/2)∥x₀ − x⋆∥², therefore f(X(t)) − f⋆ ≤ (r/t²)E(t) ≤ (r/t²)E(0) = (r²/(2t²))∥x₀ − x⋆∥², which proves that f(X(t)) converges to f⋆ at an O(1/t²) rate. One should note in particular that the squared Euclidean norm is used in the definition of E(t), and as a consequence, discretizing the ODE (2) leads to a family of unconstrained, Euclidean accelerated methods.
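As a concrete instance of the mirror descent system (1): on the probability simplex, taking ψ*(z) = log Σᵢ exp(zᵢ) gives ∇ψ* = softmax, which maps the dual space into the simplex, and a forward-Euler discretization of (1) yields the familiar exponentiated-gradient / multiplicative-weights iteration. A sketch (the step size and test problem below are illustrative choices):

```python
import numpy as np

def softmax(z):
    """grad psi* for psi*(z) = log sum_i exp(z_i); maps R^n into the simplex."""
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def mirror_descent(grad_f, z0, s, n_steps):
    """Euler discretization of (1): x_k = softmax(z_k), z_{k+1} = z_k - s grad f(x_k)."""
    z = np.asarray(z0, dtype=float)
    for _ in range(n_steps):
        x = softmax(z)
        z = z - s * grad_f(x)
    return softmax(z)
```

For a linear objective f(x) = ⟨c, x⟩ over the simplex, the iterates concentrate on the coordinate minimizing c, while remaining feasible at every step.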
In the next section, we show that by combining this argument with Nemirovski's idea of using a general Bregman divergence as a Lyapunov function, we can construct a much more general family of ODE systems which have the same O(1/t²) convergence guarantee. By discretizing the resulting dynamics, we obtain a general family of accelerated methods that are not restricted to the unconstrained Euclidean geometry.

3 Continuous-time Accelerated Mirror Descent

3.1 Derivation of the accelerated mirror descent ODE

We consider a pair of dual convex functions, ψ defined on X and ψ* defined on E*, such that ∇ψ*: E* → X. We assume that ψ* is L_{ψ*}-smooth with respect to ∥·∥*, a reference norm on the dual space. Consider the function
V(X(t), Z(t), t) = (t²/r)(f(X(t)) − f⋆) + r D_{ψ*}(Z(t), z⋆), (3)
where Z is a dual variable for which we will design the dynamics, and z⋆ is its value at equilibrium. Taking the time derivative of V, we have
(d/dt)V(X(t), Z(t), t) = (2t/r)(f(X) − f⋆) + (t²/r)⟨∇f(X), Ẋ⟩ + r⟨Ż, ∇ψ*(Z) − ∇ψ*(z⋆)⟩.
Assume that Ż = −(t/r)∇f(X). Then the time derivative of V becomes
(d/dt)V(X(t), Z(t), t) = (2t/r)(f(X) − f⋆) − t⟨∇f(X), −(t/r)Ẋ + ∇ψ*(Z) − ∇ψ*(z⋆)⟩.
Therefore, if Z is such that ∇ψ*(Z) = X + (t/r)Ẋ, and ∇ψ*(z⋆) = x⋆, then
(d/dt)V(X(t), Z(t), t) = (2t/r)(f(X) − f⋆) − t⟨∇f(X), X − x⋆⟩ ≤ (2t/r)(f(X) − f⋆) − t(f(X) − f⋆) = −t((r−2)/r)(f(X) − f⋆), (4)
and it follows that V is a Lyapunov function whenever r ≥ 2. The proposed ODE system is then
Ẋ = (r/t)(∇ψ*(Z) − X), Ż = −(t/r)∇f(X), X(0) = x₀, Z(0) = z₀, with ∇ψ*(z₀) = x₀. (5)
In the unconstrained Euclidean case, taking ψ*(z) = (1/2)∥z∥², we have ∇ψ*(z) = z, thus Z = X + (t/r)Ẋ, and the ODE system is equivalent to (d/dt)(X + (t/r)Ẋ) = −(t/r)∇f(X), which is equivalent to the ODE (2) studied in [28], which we recover as a special case.
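To sanity-check the dynamics (5), here is a crude forward-Euler simulation in the unconstrained Euclidean case ψ*(z) = ∥z∥²/2, so that ∇ψ* is the identity. Starting the clock at t = dt to sidestep the r/t singularity is our implementation shortcut, not part of the analysis.

```python
import numpy as np

def amd_ode(grad_f, x0, r=3.0, dt=1e-3, T=10.0):
    """Forward-Euler simulation of (5) with grad psi* = identity:
       X' = (r/t)(Z - X),  Z' = -(t/r) grad f(X)."""
    x = np.asarray(x0, dtype=float)
    z = x.copy()          # initial condition Z(0) = z0 with grad psi*(z0) = x0
    t = dt                # start slightly after 0 to avoid dividing by t = 0
    while t < T:
        x = x + dt * (r / t) * (z - x)
        z = z - dt * (t / r) * grad_f(x)
        t += dt
    return x
```

For a quadratic f(x) = ∥x − a∥²/2, the continuous-time guarantee of Section 3.3 gives f(X(T)) − f⋆ ≤ r²∥x₀ − a∥²/(2T²), so by T = 10 the simulated trajectory should sit well within unit distance of the minimizer a (up to Euler error).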
We also give another interpretation of ODE (5): the first equation is equivalent to tʳẊ + rt^{r−1}X = rt^{r−1}∇ψ*(Z), or, in integral form, tʳX(t) = r∫₀ᵗ τ^{r−1}∇ψ*(Z(τ))dτ, which can be written as
X(t) = (∫₀ᵗ w(τ)∇ψ*(Z(τ))dτ) / (∫₀ᵗ w(τ)dτ), with w(τ) = τ^{r−1}.
Therefore the coupled dynamics of (X, Z) can be interpreted as follows: the dual variable Z accumulates gradients at a t/r rate, while the primal variable X is a weighted average of ∇ψ*(Z(τ)) (the "mirrored" dual trajectory), with weights proportional to τ^{r−1}. This also gives an interpretation of r as a parameter controlling the weight distribution. It is also interesting to observe that the weights are increasing if and only if r ≥ 2. Finally, with this averaging interpretation, it becomes clear that the primal trajectory X(t) remains in X, since ∇ψ* maps into X and X is convex.

3.2 Solution of the proposed dynamics

First, we prove existence and uniqueness of a solution to the ODE system (5), defined for all t > 0. By assumption, ψ* is L_{ψ*}-smooth w.r.t. ∥·∥*, which is equivalent (see e.g. [26]) to ∇ψ* being L_{ψ*}-Lipschitz. Unfortunately, due to the r/t term in the expression of Ẋ, the function (X, Z, t) ↦ (Ẋ, Ż) is not Lipschitz at t = 0, and we cannot directly apply the Cauchy–Lipschitz existence and uniqueness theorem. However, one can work around this by considering a sequence of approximating ODEs, similarly to the argument used in [28].

Theorem 1. Suppose f is C¹ and ∇f is L_f-Lipschitz, and let (x₀, z₀) ∈ X × E* be such that ∇ψ*(z₀) = x₀. Then the accelerated mirror descent ODE system (5) with initial condition (x₀, z₀) has a unique solution (X, Z) in C¹([0, ∞), ℝⁿ).

We show existence of a solution on any given interval [0, T] (uniqueness is proved in the supplementary material). Let δ > 0, and consider the smoothed ODE system
Ẋ = (r/max(t, δ))(∇ψ*(Z) − X), Ż = −(t/r)∇f(X), X(0) = x₀, Z(0) = z₀ with ∇ψ*(z₀) = x₀.
(6) Since the functions (X, Z) ↦ −(t/r)∇f(X) and (X, Z) ↦ (r/max(t, δ))(∇ψ*(Z) − X) are Lipschitz for all t ∈ [0, T], by the Cauchy–Lipschitz theorem (Theorem 2.5 in [29]), the system (6) has a unique solution (X_δ, Z_δ) in C¹([0, T]). In order to show the existence of a solution to the original ODE, we use the following lemma (proved in the supplementary material).

Lemma 1. Let t₀ = 2/√(L_f L_{ψ*}). Then the family of solutions ((X_δ, Z_δ)|_{[0,t₀]})_{δ≤t₀} is equi-Lipschitz-continuous and uniformly bounded.

Proof of existence. Consider the family of solutions ((X_{δᵢ}, Z_{δᵢ})), δᵢ = t₀2^{−i}, i ∈ ℕ, restricted to [0, t₀]. By Lemma 1, this family is equi-Lipschitz-continuous and uniformly bounded, thus by the Arzelà–Ascoli theorem, there exists a subsequence ((X_{δᵢ}, Z_{δᵢ}))_{i∈I} that converges uniformly on [0, t₀] (where I ⊂ ℕ is an infinite set of indices). Let (X̄, Z̄) be its limit. We prove that (X̄, Z̄) is a solution to the original ODE (5) on [0, t₀]. First, since for all i ∈ I, X_{δᵢ}(0) = x₀ and Z_{δᵢ}(0) = z₀, it follows that X̄(0) = lim_{i→∞, i∈I} X_{δᵢ}(0) = x₀ and Z̄(0) = lim_{i→∞, i∈I} Z_{δᵢ}(0) = z₀, thus (X̄, Z̄) satisfies the initial conditions. Next, let t₁ ∈ (0, t₀), and let (X̃, Z̃) be the solution of the ODE (5) on t ≥ t₁ with initial condition (X̄(t₁), Z̄(t₁)). Since (X_{δᵢ}(t₁), Z_{δᵢ}(t₁))_{i∈I} → (X̄(t₁), Z̄(t₁)) as i → ∞, by continuity of the solution w.r.t. initial conditions (Theorem 2.8 in [29]), we have that for some ϵ > 0, X_{δᵢ} → X̃ uniformly on [t₁, t₁ + ϵ). But we also have X_{δᵢ} → X̄ uniformly on [0, t₀], therefore X̄ and X̃ coincide on [t₁, t₁ + ϵ), so X̄ satisfies the ODE on [t₁, t₁ + ϵ). Since t₁ is arbitrary in (0, t₀), this concludes the proof of existence.

3.3 Convergence rate

It is now straightforward to establish the convergence rate of the solution.

Theorem 2. Suppose that f has Lipschitz gradient, and that ψ* is a smooth distance generating function. Let (X(t), Z(t)) be the solution to the accelerated mirror descent ODE (5) with r ≥ 2.
Then for all t > 0, f(X(t)) − f⋆ ≤ r²D_{ψ*}(z₀, z⋆)/t².

Proof. By construction of the ODE, V(X(t), Z(t), t) = (t²/r)(f(X(t)) − f⋆) + rD_{ψ*}(Z(t), z⋆) is a Lyapunov function. It follows that for all t > 0, (t²/r)(f(X(t)) − f⋆) ≤ V(X(t), Z(t), t) ≤ V(x₀, z₀, 0) = rD_{ψ*}(z₀, z⋆).

4 Discretization

Next, we show that with a careful discretization of these continuous-time dynamics, we can obtain a general family of accelerated mirror descent methods for constrained optimization. Using a mixed forward/backward Euler scheme (see e.g. Chapter 2 in [10]), we can discretize the ODE system (5) with step size √s as follows. Given a solution (X, Z) of the ODE (5), let t_k = k√s and x^{(k)} = X(t_k) = X(k√s). Approximating Ẋ(t_k) by (X(t_k + √s) − X(t_k))/√s, we propose the discretization
(x^{(k+1)} − x^{(k)})/√s = (r/(k√s))(∇ψ*(z^{(k)}) − x^{(k+1)}),
(z^{(k+1)} − z^{(k)})/√s + (k√s/r)∇f(x^{(k+1)}) = 0. (7)
The first equation can be rewritten as x^{(k+1)} = (x^{(k)} + (r/k)∇ψ*(z^{(k)})) / (1 + r/k) (note the independence of s, due to the time-scale invariance of the first ODE). In other words, x^{(k+1)} is a convex combination of ∇ψ*(z^{(k)}) and x^{(k)} with coefficients λ_k = r/(r+k) and 1 − λ_k = k/(r+k). To summarize, our first discrete scheme can be written as
x^{(k+1)} = λ_k ∇ψ*(z^{(k)}) + (1 − λ_k) x^{(k)}, λ_k = r/(r+k),
z^{(k+1)} = z^{(k)} − (ks/r)∇f(x^{(k+1)}). (8)
Since ∇ψ* maps into the feasible set X, starting from x^{(0)} ∈ X guarantees that x^{(k)} remains in X for all k (by convexity of X). Note that by duality, we have ∇ψ*(x*) = argmax_{x∈X} ⟨x, x*⟩ − ψ(x), and if we additionally assume that ψ is differentiable on the image of ∇ψ*, then ∇ψ = (∇ψ*)^{−1} (Theorem 23.5 in [26]); thus if we write z̃^{(k)} = ∇ψ*(z^{(k)}), the second equation can be written as
z̃^{(k+1)} = ∇ψ*(∇ψ(z̃^{(k)}) − (ks/r)∇f(x^{(k+1)})) = argmin_{x∈X} ψ(x) − ⟨∇ψ(z̃^{(k)}) − (ks/r)∇f(x^{(k+1)}), x⟩ = argmin_{x∈X} (ks/r)⟨∇f(x^{(k+1)}), x⟩ + D_ψ(x, z̃^{(k)}).
We will eventually modify this scheme in order to prove the desired O(1/k²) convergence rate; however, we start by analyzing this version.
Motivated by the continuous-time Lyapunov function (3), and using the correspondence t ≈ k√s, we consider the potential function E(k) = V(x(k), z(k), k√s) = (k^2 s/r)(f(x(k)) − f⋆) + rDψ∗(z(k), z⋆). Then we have
E(k+1) − E(k) = ((k + 1)^2 s/r)(f(x(k+1)) − f⋆) − (k^2 s/r)(f(x(k)) − f⋆) + r(Dψ∗(z(k+1), z⋆) − Dψ∗(z(k), z⋆))
= (k^2 s/r)(f(x(k+1)) − f(x(k))) + (s(1 + 2k)/r)(f(x(k+1)) − f⋆) + r(Dψ∗(z(k+1), z⋆) − Dψ∗(z(k), z⋆)).
And through simple algebraic manipulation, the last term can be bounded as follows:
Dψ∗(z(k+1), z⋆) − Dψ∗(z(k), z⋆)
= Dψ∗(z(k+1), z(k)) + ⟨∇ψ∗(z(k)) − ∇ψ∗(z⋆), z(k+1) − z(k)⟩ (by definition of the Bregman divergence)
= Dψ∗(z(k+1), z(k)) + ⟨(k/r)(x(k+1) − x(k)) + x(k+1) − x⋆, −(ks/r)∇f(x(k+1))⟩ (by the discretization (8))
≤ Dψ∗(z(k+1), z(k)) + (k^2 s/r^2)(f(x(k)) − f(x(k+1))) + (ks/r)(f⋆ − f(x(k+1))). (by convexity of f)
Therefore we have E(k+1) − E(k) ≤ −(s[(r − 2)k − 1]/r)(f(x(k+1)) − f⋆) + rDψ∗(z(k+1), z(k)). Comparing this expression with the expression (4) of (d/dt)V(X(t), Z(t), t) in the continuous-time case, we see that we obtain an analogous expression, except for the additional Bregman divergence term rDψ∗(z(k+1), z(k)), and we cannot immediately conclude that V is a Lyapunov function. This can be remedied by the following modification of the discretization scheme. 4.1 A family of discrete-time accelerated mirror descent methods In the expression (8) of x(k+1) = λk˜z(k) + (1 − λk)x(k), we propose to replace x(k) with ˜x(k), obtained as a solution to a minimization problem ˜x(k) = arg min_{x∈X} γs⟨∇f(x(k)), x⟩ + R(x, x(k)), where R is a regularization function that satisfies the following assumptions: there exist 0 < ℓR ≤ LR such that for all x, x′ ∈ X, (ℓR/2)∥x − x′∥^2 ≤ R(x, x′) ≤ (LR/2)∥x − x′∥^2. In the Euclidean case, one can take R(x, x′) = ∥x − x′∥^2/2, in which case ℓR = LR = 1 and the ˜x update becomes a prox-update.
In the general case, one can take R(x, x′) = Dφ(x, x′) for some distance generating function φ which is ℓR-strongly convex and LR-smooth, in which case the ˜x update becomes a mirror update. The resulting method is summarized in Algorithm 1. This algorithm is a generalization of Allen-Zhu and Orecchia's interpretation of Nesterov's method in [1], where x(k+1) is a convex combination of a mirror descent update and a gradient descent update.
Algorithm 1 Accelerated mirror descent with distance generating function ψ∗, regularizer R, step size s, and parameter r ≥ 3
1: Initialize ˜x(0) = x0, ˜z(0) = x0, or z(0) ∈ (∇ψ)−1(x0).
2: for k ∈ N do
3: x(k+1) = λk˜z(k) + (1 − λk)˜x(k), with λk = r/(r + k).
4: ˜z(k+1) = arg min_{˜z∈X} (ks/r)⟨∇f(x(k+1)), ˜z⟩ + Dψ(˜z, ˜z(k)). If ψ is non-differentiable, z(k+1) = z(k) − (ks/r)∇f(x(k+1)) and ˜z(k+1) = ∇ψ∗(z(k+1)).
5: ˜x(k+1) = arg min_{˜x∈X} γs⟨∇f(x(k+1)), ˜x⟩ + R(˜x, x(k+1))
4.2 Consistency of the modified scheme One can show that given our assumptions on R, ˜x(k) = x(k) + O(s). Indeed, we have (ℓR/2)∥˜x(k) − x(k)∥^2 ≤ R(˜x(k), x(k)) ≤ R(x(k), x(k)) + γs⟨∇f(x(k)), x(k) − ˜x(k)⟩ ≤ γs∥∇f(x(k))∥∗ ∥˜x(k) − x(k)∥, therefore ∥˜x(k) − x(k)∥ ≤ 2γs∥∇f(x(k))∥∗/ℓR, which proves the claim. Using this observation, we can show that the modified discretization scheme is consistent with the original ODE (5), that is, the difference equations defining x(k) and z(k) converge, as s tends to 0, to the ordinary differential equations of the continuous-time system (5). The difference equations of Algorithm 1 are equivalent to (7) in which x(k) is replaced by ˜x(k), i.e.
(x(k+1) − ˜x(k))/√s = (r/(k√s))(∇ψ∗(z(k)) − x(k+1)),
(z(k+1) − z(k))/√s = −(k√s/r)∇f(x(k+1)).
Now suppose there exist C1 functions (X, Z), defined on R+, such that X(tk) ≈ x(k) and Z(tk) ≈ z(k) for tk = k√s.
Then, using the fact that ˜x(k) = x(k) + O(s), we have (x(k+1) − ˜x(k))/√s = (x(k+1) − x(k))/√s + O(√s) ≈ (X(tk + √s) − X(tk))/√s + O(√s) = ˙X(tk) + o(1), and similarly, (z(k+1) − z(k))/√s ≈ ˙Z(tk) + o(1), therefore the difference equation system can be written as
˙X(tk) + o(1) = (r/tk)(∇ψ∗(Z(tk)) − X(tk + √s)),
˙Z(tk) + o(1) = −(tk/r)∇f(X(tk + √s)),
which converges to the ODE (5) as s → 0. 4.3 Convergence rate To prove convergence of the algorithm, consider the modified potential function ˜E(k) = V(˜x(k), z(k), k√s) = (k^2 s/r)(f(˜x(k)) − f⋆) + rDψ∗(z(k), z⋆). Lemma 2. If γ ≥ LR Lψ∗ and s ≤ ℓR/(2Lf γ), then for all k ≥ 0, ˜E(k+1) − ˜E(k) ≤ ((2k + 1 − kr)s/r)(f(˜x(k+1)) − f⋆). As a consequence, if r ≥ 3, ˜E is a Lyapunov function for k ≥ 1. This lemma is proved in the supplementary material. Theorem 3. The discrete-time accelerated mirror descent Algorithm 1 with parameter r ≥ 3 and step sizes γ ≥ LR Lψ∗, s ≤ ℓR/(2Lf γ), guarantees that for all k > 0, f(˜x(k)) − f⋆ ≤ (r/(sk^2)) ˜E(1) ≤ r^2 Dψ∗(z0, z⋆)/(sk^2) + (f(x0) − f⋆)/k^2. Proof. The first inequality follows immediately from Lemma 2. The second inequality follows from a simple bound on ˜E(1), proved in the supplementary material. 4.4 Example: accelerated entropic descent We give an instance of Algorithm 1 for simplex-constrained problems. Suppose that X = ∆n = {x ∈ R^n_+ : Σ_{i=1}^n xi = 1} is the n-simplex. Taking ψ to be the negative entropy on ∆, we have for x ∈ X, z ∈ E∗,
ψ(x) = Σ_{i=1}^n xi ln xi + δ(x|∆), ψ∗(z) = ln(Σ_{i=1}^n e^{zi}), ∂ψ(x) = (1 + ln xi)_i + Ru, ∇ψ∗(z)i = e^{zi}/Σ_{j=1}^n e^{zj},
where δ(·|∆) is the indicator function of the simplex (δ(x|∆) = 0 if x ∈ ∆ and +∞ otherwise), and u ∈ R^n is a normal vector to the affine hull of the simplex. The resulting mirror descent update is a simple entropy projection and can be computed exactly in O(n) operations, and ψ∗ can be shown to be 1-smooth w.r.t. ∥·∥∞, see for example [3, 6].
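Instantiated on the simplex, Algorithm 1 fits in a few lines of Python. Below is a sketch of ours: the entropy mirror step of line 4 is taken in its closed multiplicative form ˜z_i ∝ ˜z_i e^{−(ks/r)∇f_i} (the exponentiated-gradient update), and R is the Euclidean choice R(x, x′) = ∥x − x′∥^2/2, so line 5 becomes a Euclidean projection onto the simplex. The projection routine and the toy quadratic objective are our own illustrative choices, not the paper's experiments.

```python
import numpy as np

def project_simplex(v):
    """Euclidean projection onto the probability simplex (sort-based)."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, v.size + 1) > css - 1.0)[0][-1]
    return np.maximum(v - (css[rho] - 1.0) / (rho + 1.0), 0.0)

def amd_algorithm1(grad_f, x0, steps, r=3.0, s=0.1, gamma=1.0):
    """Sketch of Algorithm 1 with entropy psi (multiplicative z-tilde step)
    and Euclidean R (projected-gradient x-tilde step)."""
    x_t, z_t = x0.copy(), x0.copy()            # x-tilde, z-tilde
    for k in range(steps):
        lam = r / (r + k)
        x = lam * z_t + (1.0 - lam) * x_t      # line 3
        g = grad_f(x)
        z_t = z_t * np.exp(-(k * s / r) * g)   # line 4: entropy mirror step
        z_t /= z_t.sum()
        x_t = project_simplex(x - gamma * s * g)   # line 5: prox step
    return x_t

# Toy problem: f(x) = ||x - x_star||^2 on the simplex, so L_f = 2;
# gamma = 1 >= L_R * L_{psi*} and s = 0.1 <= l_R / (2 L_f gamma) = 0.25.
x_star = np.array([0.5, 0.25, 0.25])
grad_f = lambda x: 2.0 * (x - x_star)
xk = amd_algorithm1(grad_f, np.ones(3) / 3, steps=500)
```

The step-size choices in the comment mirror the conditions of Theorem 3; this is a sketch under those assumptions rather than a faithful reimplementation of the paper's experiments.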
For the second update, we take R(x, y) = Dφ(x, y) where φ is a smoothed negative entropy function defined as follows: let ϵ > 0, and let φ(x) = ϵ Σ_{i=1}^n (xi + ϵ) ln(xi + ϵ) + δ(x|∆). Although no simple, closed-form expression is known for ∇φ∗, it can be computed efficiently, in O(n log n) time using a deterministic algorithm, or O(n) expected time using a randomized algorithm, see [17]. Additionally, φ satisfies our assumptions: it is ϵ/(1 + nϵ)-strongly convex and 1-smooth w.r.t. ∥·∥∞. The resulting accelerated mirror descent method on the simplex can then be implemented efficiently, and by Theorem 3 it is guaranteed to converge in O(1/k^2) whenever γ ≥ 1 and s ≤ ϵ/(2(1 + nϵ)Lf γ). 5 Numerical Experiments We test the accelerated mirror descent method in Algorithm 1, on simplex-constrained problems in R^n, n = 100, with two different objective functions: a simple quadratic f(x) = ⟨x − x⋆, Q(x − x⋆)⟩, for a random positive semi-definite matrix Q, and a log-sum-exp function.
[Figure 1: Evolution of f(x(k)) − f⋆ on simplex-constrained problems, using different accelerated mirror descent methods with entropy distance generating functions. Panels: (a) weakly convex quadratic, rank 10; (b) log-sum-exp; (c) effect of the parameter r (r = 3, 10, 30, 90). Methods compared: mirror descent, accelerated mirror descent, speed restart, gradient restart.]
Algorithm 2 Accelerated mirror descent with restart
1: Initialize l = 0, ˜x(0) = ˜z(0) = x0.
2: for k ∈ N do
3: x(k+1) = λl˜z(k) + (1 − λl)˜x(k), with λl = r/(r + l)
4: ˜z(k+1) = arg min_{˜z∈X} (ks/r)⟨∇f(x(k+1)), ˜z⟩ + Dψ(˜z, ˜z(k))
5: ˜x(k+1) = arg min_{˜x∈X} γs⟨∇f(x(k+1)), ˜x⟩ + R(˜x, x(k+1))
6: l ← l + 1
7: if Restart condition then
8: ˜z(k+1) ← x(k+1), l ← 0
The log-sum-exp objective is given by f(x) = ln(Σ_{i=1}^I e^{⟨ai, x⟩ + bi}), where each entry in ai ∈ R^n and bi ∈ R is i.i.d. normal. We implement the accelerated entropic descent algorithm proposed in Section 4.4, and include the (non-accelerated) entropic descent for reference. We also adapt the gradient restarting heuristic proposed by O'Donoghue and Candès in [24], as well as the speed restart heuristic proposed by Su et al. in [28]. The generic restart method is given in Algorithm 2. The restart conditions are the following: (i) gradient restart: ⟨x(k+1) − x(k), ∇f(x(k))⟩ > 0, and (ii) speed restart: ∥x(k+1) − x(k)∥ < ∥x(k) − x(k−1)∥. The results are given in Figure 1. The accelerated mirror descent method exhibits a polynomial convergence rate, which is empirically faster than the O(1/k^2) rate predicted by Theorem 3. The method also exhibits oscillations around the set of minimizers, and increasing the parameter r seems to reduce the period of the oscillations, and results in a trajectory that is initially slower, but faster for large k, see Figure 1-c. The restarting heuristics alleviate the oscillation and empirically speed up the convergence. We also visualized, for each experiment, the trajectory of the iterates x(k) for each method, projected on a 2-dimensional hyperplane. The corresponding videos are included in the supplementary material. 6 Conclusion By combining the Lyapunov argument that motivated mirror descent, and the recent ODE interpretation [28] of Nesterov's method, we proposed a family of ODE systems for minimizing convex functions with a Lipschitz gradient, which are guaranteed to converge at a O(1/t^2) rate, and proved existence and uniqueness of a solution.
Then by discretizing the ODE, we proposed a family of accelerated mirror descent methods for constrained optimization and proved an analogous O(1/k2) rate when the step size is small enough. The connection with the continuous-time dynamics motivates a more detailed study of the ODE (5), such as studying the oscillatory behavior of its solution trajectories, its convergence rates under additional assumptions such as strong convexity, and a rigorous study of the restart heuristics. Acknowledgments We gratefully acknowledge the NSF (CCF-1115788, CNS-1238959, CNS-1238962, CNS-1239054, CNS-1239166), the ARC (FL110100281 and ACEMS), and the Simons Institute Fall 2014 Algorithmic Spectral Graph Theory Program. 8 References [1] Zeyuan Allen-Zhu and Lorenzo Orecchia. Linear coupling: An ultimate unification of gradient and mirror descent. In ArXiv, 2014. [2] Arindam Banerjee, Srujana Merugu, Inderjit S. Dhillon, and Joydeep Ghosh. Clustering with Bregman divergences. J. Mach. Learn. Res., 6:1705–1749, December 2005. [3] Amir Beck and Marc Teboulle. Mirror descent and nonlinear projected subgradient methods for convex optimization. Oper. Res. Lett., 31(3):167–175, May 2003. [4] Amir Beck and Marc Teboulle. A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences, 2(1):183–202, 2009. [5] A. Ben-Tal and A. Nemirovski. Lectures on Modern Convex Optimization. SIAM, 2001. [6] Aharon Ben-Tal, Tamar Margalit, and Arkadi Nemirovski. The ordered subsets mirror descent optimization method with applications to tomography. SIAM J. on Optimization, 12(1):79–108, January 2001. [7] Anthony Bloch, editor. Hamiltonian and gradient flows, algorithms, and control. American Mathematical Society, 1994. [8] A. A. Brown and M. C. Bartholomew-Biggs. Some effective methods for unconstrained optimization based on the solution of systems of ordinary differential equations. Journal of Optimization Theory and Applications, 62(2):211–224, 1989. 
[9] S´ebastien Bubeck and Nicol`o Cesa-Bianchi. Regret analysis of stochastic and nonstochastic multi-armed bandit problems. Foundations and Trends in Machine Learning, 5(1):1–122, 2012. [10] J. C. Butcher. Numerical Methods for Ordinary Differential Equations. John Wiley & Sons, Ltd, 2008. [11] Nicol`o Cesa-Bianchi and G´abor Lugosi. Prediction, Learning, and Games. Cambridge, 2006. [12] Ofer Dekel, Ran Gilad-Bachrach, Ohad Shamir, and Lin Xiao. Optimal distributed online prediction. In Proceedings of the 28th International Conference on Machine Learning (ICML), June 2011. [13] U. Helmke and J.B. Moore. Optimization and dynamical systems. Communications and control engineering series. Springer-Verlag, 1994. [14] Anatoli Juditsky. Convex Optimization II: Algorithms, Lecture Notes. 2013. [15] Anatoli Juditsky, Arkadi Nemirovski, and Claire Tauvel. Solving variational inequalities with stochastic mirror-prox algorithm. Stoch. Syst., 1(1):17–58, 2011. [16] H.K. Khalil. Nonlinear systems. Macmillan Pub. Co., 1992. [17] Walid Krichene, Syrine Krichene, and Alexandre Bayen. Efficient Bregman projections onto the simplex. In 54th IEEE Conference on Decision and Control, 2015. [18] A.M. Lyapunov. General Problem of the Stability Of Motion. Control Theory and Applications Series. Taylor & Francis, 1992. [19] A. S. Nemirovsky and D. B. Yudin. Problem Complexity and Method Efficiency in Optimization. WileyInterscience series in discrete mathematics. Wiley, 1983. [20] Yu. Nesterov. Smooth minimization of non-smooth functions. Mathematical Programming, 103(1):127– 152, 2005. [21] Yu. Nesterov. Gradient methods for minimizing composite functions. Mathematical Programming, 140(1):125–161, 2013. [22] Yurii Nesterov. A method of solving a convex programming problem with convergence rate o(1/k2). Soviet Mathematics Doklady, 27(2):372–376, 1983. [23] Yurii Nesterov. Introductory Lectures on Convex Optimization, volume 87. Springer Science & Business Media, 2004. 
[24] Brendan O'Donoghue and Emmanuel Candès. Adaptive restart for accelerated gradient schemes. Foundations of Computational Mathematics, 15(3):715–732, 2015. [25] M. Raginsky and J. Bouvrie. Continuous-time stochastic mirror descent on a network: Variance reduction, consensus, convergence. In CDC 2012, pages 6793–6800, 2012. [26] R.T. Rockafellar. Convex Analysis. Princeton University Press, 1970. [27] J. Schropp and I. Singer. A dynamical systems approach to constrained minimization. Numerical Functional Analysis and Optimization, 21(3-4):537–551, 2000. [28] Weijie Su, Stephen Boyd, and Emmanuel Candès. A differential equation for modeling Nesterov's accelerated gradient method: Theory and insights. In NIPS, 2014. [29] Gerald Teschl. Ordinary differential equations and dynamical systems, volume 140. American Mathematical Soc., 2012.
Efficient Non-greedy Optimization of Decision Trees Mohammad Norouzi1∗ Maxwell D. Collins2∗ Matthew Johnson3 David J. Fleet4 Pushmeet Kohli5 1,4 Department of Computer Science, University of Toronto 2 Department of Computer Science, University of Wisconsin-Madison 3,5 Microsoft Research Abstract Decision trees and randomized forests are widely used in computer vision and machine learning. Standard algorithms for decision tree induction optimize the split functions one node at a time according to some splitting criteria. This greedy procedure often leads to suboptimal trees. In this paper, we present an algorithm for optimizing the split functions at all levels of the tree jointly with the leaf parameters, based on a global objective. We show that the problem of finding optimal linear-combination (oblique) splits for decision trees is related to structured prediction with latent variables, and we formulate a convex-concave upper bound on the tree's empirical loss. Computing the gradient of the proposed surrogate objective with respect to each training exemplar is O(d2), where d is the tree depth, and thus training deep trees is feasible. The use of stochastic gradient descent for optimization enables effective training with large datasets. Experiments on several classification benchmarks demonstrate that the resulting non-greedy decision trees outperform greedy decision tree baselines. 1 Introduction Decision trees and forests [5, 21, 4] have a long and rich history in machine learning [10, 7]. Recent years have seen an increase in their popularity, owing to their computational efficiency and applicability to large-scale classification and regression tasks. A case in point is Microsoft Kinect where decision trees are trained on millions of exemplars to enable real-time human pose estimation from depth images [22]. Conventional algorithms for decision tree induction are greedy.
They grow a tree one node at a time following procedures laid out decades ago by frameworks such as ID3 [21] and CART [5]. While recent work has proposed new objective functions to guide greedy algorithms [20, 12], it continues to be the case that decision tree applications (e.g., [9, 14]) utilize the same dated methods of tree induction. Greedy decision tree induction builds a binary tree via a recursive procedure as follows: beginning with a single node, indexed by i, a split function si is optimized based on a corresponding subset of the training data Di such that Di is split into two subsets, which in turn define the training data for the two children of the node i. The intrinsic limitation of this procedure is that the optimization of si is solely conditioned on Di, i.e., there is no ability to fine-tune the split function si based on the results of training at lower levels of the tree. This paper proposes a general framework for non-greedy learning of the split parameters for tree-based methods that addresses this limitation. We focus on binary trees, while extension to N-ary trees is possible. We show that our joint optimization of the split functions at different levels of the tree under a global objective not only promotes cooperation between the split nodes to create more compact trees, but also leads to better generalization performance. ∗Part of this work was done while M. Norouzi and M. D. Collins were at Microsoft Research, Cambridge. 1 One of the key contributions of this work is establishing a link between the decision tree optimization problem and the problem of structured prediction with latent variables [25]. We present a novel formulation of the decision tree learning that associates a binary latent decision variable with each split node in the tree and uses such latent variables to formulate the tree’s empirical loss. Inspired by advances in structured prediction [23, 24, 25], we propose a convex-concave upper bound on the empirical loss. 
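For concreteness, the greedy recursive procedure described above can be sketched in a few lines. This toy version of ours uses axis-aligned splits with the Gini criterion (one common choice of splitting criterion); all names and design choices are illustrative, not taken from the paper.

```python
import numpy as np

def gini(y):
    """Gini impurity of a label vector."""
    _, counts = np.unique(y, return_counts=True)
    p = counts / counts.sum()
    return 1.0 - float(np.sum(p ** 2))

def best_split(X, y):
    """Greedy step: pick the (dimension, threshold) minimizing weighted
    child impurity on this node's data only."""
    best = (np.inf, None, None)
    n = len(y)
    for d in range(X.shape[1]):
        for t in np.unique(X[:, d]):
            left = X[:, d] <= t
            if left.all() or not left.any():
                continue
            score = (left.sum() * gini(y[left])
                     + (~left).sum() * gini(y[~left])) / n
            if score < best[0]:
                best = (score, d, t)
    return best

def grow(X, y, depth):
    """Recursive greedy induction: split, then recurse on each child subset."""
    if depth == 0 or gini(y) == 0.0:
        vals, counts = np.unique(y, return_counts=True)
        return ('leaf', vals[np.argmax(counts)])
    _, d, t = best_split(X, y)
    if d is None:
        vals, counts = np.unique(y, return_counts=True)
        return ('leaf', vals[np.argmax(counts)])
    left = X[:, d] <= t
    return ('node', d, t, grow(X[left], y[left], depth - 1),
            grow(X[~left], y[~left], depth - 1))

def predict(tree, x):
    while tree[0] == 'node':
        _, d, t, l, r = tree
        tree = l if x[d] <= t else r
    return tree[1]

X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([0, 1, 1, 0])
tree = grow(X, y, depth=2)
```

Note how each split is chosen from the node's own data subset alone, with no ability to revisit it later; this is exactly the limitation the non-greedy method below removes.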
This bound acts as a surrogate objective that is optimized using stochastic gradient descent (SGD) to find a locally optimal configuration of the split functions. One complication introduced by this particular formulation is that the number of latent decision variables grows exponentially with the tree depth d. As a consequence, each gradient update will have a complexity of O(2dp) for p-dimensional inputs. One of our technical contributions is showing how this complexity can be reduced to O(d2p) by modifying the surrogate objective, thereby enabling efficient learning of deep trees. 2 Related work Finding optimal split functions at different levels of a decision tree according to some global objective, such as a regularized empirical risk, is NP-complete [11] due to the discrete and sequential nature of the decisions in a tree. Thus, finding an efficient alternative to the greedy approach has remained a difficult objective despite many prior attempts. Bennett [1] proposes a non-greedy multi-linear programming based approach for global tree optimization and shows that the method produces trees that have higher classification accuracy than standard greedy trees. However, their method is limited to binary classification with 0-1 loss and has a high computation complexity, making it only applicable to trees with few nodes. The work in [15] proposes a means for training decision forests in an online setting by incrementally extending the trees as new data points are added. As opposed to a naive incremental growing of the trees, this work models the decision trees with Mondrian Processes. The Hierarchical Mixture of Experts model [13] uses soft splits rather than hard binary decisions to capture situations where the transition from low to high response is gradual. The use of soft splits at internal nodes of the tree yields a probabilistic model in which the log-likelihood is a smooth function of the unknown parameters. 
Hence, training based on log-likelihood is amenable to numerical optimization via methods such as expectation maximization (EM). That said, the soft splits necessitate the evaluation of all or most of the experts for each data point, so much of the computational advantage of the decision tree are lost. Murthy and Salzburg [17] argue that non-greedy tree learning methods that work by looking ahead are unnecessary and sometimes harmful. This is understandable since their methods work by minimizing the empirical loss without any regularization, which is prone to overfitting. To avoid this problem, it is a common practice (see Breiman [4] or Criminisi and Shotton [7] for an overview) to limit the tree depth and introduce limits on the number of training instances below which a tree branch is not extended, or to force a diverse ensemble of trees (i.e., a decision forest) through the use of bagging. Bennett and Blue [2] describe a different way to overcome overfitting by using max-margin framework and the Support Vector Machines (SVM) at the split nodes of the tree. Subsequently, Bennett et al. [3] show how enlarging the margin of decision tree classifiers results in better generalization performance. Our formulation for decision tree induction improves on prior art in a number of ways. Not only does our latent variable formulation of decision trees enable efficient learning, it can handle any general loss function while not sacrificing the O(dp) complexity of inference imparted by the tree structure. Further, our surrogate objective provides a natural way to regularize the joint optimization of tree parameters to discourage overfitting. 3 Problem formulation For ease of exposition, this paper focuses on binary classification trees, with m internal (split) nodes, and m + 1 leaf (terminal) nodes. Note that in a binary tree the number of leaves is always one more than the number of internal (non-leaf) nodes. 
An input, x ∈ R^p, is directed from the root of the tree down through internal nodes to a leaf node. Each leaf node specifies a distribution over k class labels. Each internal node, indexed by i ∈ {1, . . . , m}, performs a binary test by evaluating a node-specific split function si(x) : R^p → {−1, +1}.
[Figure 1: The binary split decisions in a decision tree with m = 3 internal nodes can be thought of as a binary vector h = [h1, h2, h3]T. Tree navigation to reach a leaf can be expressed in terms of a function f(h), and the selected leaf parameters by θ = ΘTf(h). For example, f([+1, −1, +1]T) = [0, 0, 0, 1]T = 1_4, so θ = ΘTf(h) = θ4; and f([−1, +1, +1]T) = [0, 1, 0, 0]T = 1_2, so θ = ΘTf(h) = θ2.]
If si(x) evaluates to −1, then x is directed to the left child of node i. Otherwise, x is directed to the right child. And so on down the tree. Each split function si(·), parameterized by a weight vector wi, is assumed to be a linear threshold function, i.e., si(x) = sgn(wiT x). We incorporate an offset parameter to obtain split functions of the form sgn(wiT x − bi) by appending a constant "−1" to the input feature vector. Each leaf node, indexed by j ∈ {1, . . . , m + 1}, specifies a conditional probability distribution over class labels, l ∈ {1, . . . , k}, denoted p(y = l | j). Leaf distributions are parametrized with a vector of unnormalized predictive log-probabilities, denoted θj ∈ R^k, and a softmax function; i.e., p(y = l | j) = exp(θj[l]) / Σ_{α=1}^k exp(θj[α]), (1) where θj[α] denotes the αth element of vector θj. The parameters of the tree comprise the m internal weight vectors, {wi}_{i=1}^m, and the m + 1 vectors of unnormalized log-probabilities, one for each leaf node, {θj}_{j=1}^{m+1}. We pack these parameters into two matrices W ∈ R^{m×p} and Θ ∈ R^{(m+1)×k} whose rows comprise weight vectors and leaf parameters, i.e., W ≡ [w1, . . . , wm]T and Θ ≡ [θ1, . . . , θm+1]T. Given a dataset of input-output pairs, D ≡ {xz, yz}_{z=1}^n, where yz ∈ {1, . . .
, k} is the ground truth class label associated with input xz ∈Rp, we wish to find a joint configuration of oblique splits W and leaf parameters Θ that minimize some measure of misclassification loss on the training dataset. Joint optimization of the split functions and leaf parameters according to a global objective is known to be extremely challenging [11] due to the discrete and sequential nature of the splitting decisions within the tree. One can evaluate all of the split functions, for every internal node of the tree, on input x by computing sgn(Wx), where sgn(·) is the element-wise sign function. One key idea that helps linking decision tree learning to latent structured prediction is to think of an m-bit vector of potential split decisions, e.g., h = sgn(Wx) ∈{−1, +1}m, as a latent variable. Such a latent variable determines the leaf to which a data point is directed, and then classified using the leaf parameters. To formulate the loss for (x, y), we introduce a tree navigation function f : Hm →Im+1 that maps an m-bit sequence of split decisions (Hm ≡{−1, +1}m) to an indicator vector that specifies a 1-of-(m + 1) encoding. Such an indicator vector is only non-zero at the index of the selected leaf. Fig. 1 illustrates the tree navigation function for a tree with 3 internal nodes. Using the notation developed above, θ = ΘTf(sgn(Wx)) represents the parameters corresponding to the leaf to which x is directed by the split functions in W. A generic loss function of the form ℓ(θ, y) measures the discrepancy between the model prediction based on θ and an output y. For the softmax model given by (1), a natural loss is the negative log probability of the correct label, referred to as log loss, ℓ(θ, y) = ℓlog(θ, y) = −θ[y] + log k X β=1 exp(θ[β]) . (2) 3 For regression tasks, when y ∈Rq, and the value of θ ∈Rq is directly emitted as the model prediction, a natural choice of ℓis squared loss, ℓ(θ, y) = ℓsqr(θ, y) = ∥θ −y∥2 . 
(3) One can adopt other forms of loss within our decision tree learning framework as well. The goal of learning is to find W and Θ that minimize empirical loss, for a given training set D, that is, L(W, Θ; D) = X (x,y)∈D ℓ ΘTf(sgn(Wx)), y . (4) Direct global optimization of empirical loss L(W, Θ; D) with respect to W is challenging. It is a discontinuous and piecewise-constant function of W. Furthermore, given an input x, the navigation function f(·) yields a leaf parameter vector based on a sequence of binary tests, where the results of the initial tests determine which subsequent tests are performed. It is not clear how this dependence of binary tests should be formulated. 4 Decision trees and structured prediction To overcome the intractability in the optimization of L, we develop a piecewise smooth upper bound on empirical loss. Our upper bound is inspired by the formulation of structured prediction with latent variables [25]. A key observation that links decision tree learning to structured prediction, is that one can re-express sgn(Wx) in terms of a latent variable h. That is, sgn(Wx) = argmax h∈Hm (hTWx) . (5) In this form, decision tree’s split functions implicitly map an input x to a binary vector h by maximizing a score function hTWx, the inner product of h and Wx. One can re-express the score function in terms of a more familiar form of a joint feature space on h and x, as wTφ(h, x), where φ(h, x) = vec (hxT), and w = vec (W). Previously, Norouzi and Fleet [19] used the same reformulation (5) of linear threshold functions to learn binary similarity preserving hash functions. Given (5), we re-express empirical loss as, L(W, Θ; D) = X (x,y)∈D ℓ(ΘTf(bh(x)), y) , where bh(x) = argmax h∈Hm (hTWx) . (6) This objective resembles the objective functions used in structured prediction, and since we do not have a priori access to the ground truth split decisions, bh(x), this problem is a form of structured prediction with latent variables. 
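Since the link to structured prediction rests on the navigation function f and the reformulation (5), both are easy to check numerically. In the sketch below the m internal nodes are stored in heap order (node 1 the root, node i with children 2i and 2i + 1 — our assumption about the layout, consistent with Figure 1); it reproduces the two examples from Figure 1 and verifies sgn(Wx) = argmax_h hᵀWx by brute force on a small random instance.

```python
import numpy as np
from itertools import product

def navigate(h):
    """Tree navigation f(h): map m split decisions (-1 = go left, +1 = go
    right, internal nodes in heap order) to a 1-of-(m+1) leaf indicator."""
    m = len(h)
    i = 1
    while i <= m:                      # descend until we step past a leaf
        i = 2 * i + (1 if h[i - 1] > 0 else 0)
    f = np.zeros(m + 1, dtype=int)
    f[i - m - 1] = 1                   # leaves occupy heap slots m+1..2m+1
    return f

# Reformulation (5): sgn(Wx) maximizes h^T (Wx) over h in {-1,+1}^m.
rng = np.random.default_rng(0)
W, x = rng.normal(size=(4, 3)), rng.normal(size=3)
scores = W @ x
h_best = np.array(max(product([-1, 1], repeat=4),
                      key=lambda h: float(np.dot(h, scores))))
```

This heap layout assumes a complete binary tree (m = 2^d − 1 internal nodes); the brute-force maximization is purely a check, since the argmax decomposes coordinate-wise into sgn(Wx).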
5 Upper bound on empirical loss We develop an upper bound on loss for an input-output pair (x, y), which takes the form, ℓ(ΘTf(sgn(Wx)), y) ≤ max g∈Hm gTWx + ℓ(ΘTf(g), y) − max h∈Hm(hTWx) . (7) To validate the bound, first note that the second term on the RHS is maximized by h = bh(x) = sgn(Wx). Second, when g = bh(x), it is clear that the LHS equals the RHS. Finally, for all other values of g, the RHS can only get larger than when g = bh(x) because of the max operator. Hence, the inequality holds. An algebraic proof of (7) is presented in the supplementary material. In the context of structured prediction, the first term of the upper bound, i.e., the maximization over g, is called loss-augmented inference, as it augments the inference problem, i.e., the maximization over h, with a loss term. Fortunately, the loss-augmented inference for our decision tree learning formulation can be solved exactly, as discussed below. It is also notable that the loss term on the LHS of (7) is invariant to the scale of W, but the upper bound on the right side of (7) is not. As a consequence, as with binary SVM and margin-rescaling formulations of structural SVM [24], we introduce a regularizer on the norm of W when optimizing the bound. To justify the regularizer, we discuss the effect of the scale of W on the bound. 4 Proposition 1. The upper bound on the loss becomes tighter as a constant multiple of W increases, i.e., for a > b > 0: max g∈Hm agTWx + ℓ(ΘTf(g), y) −max h∈Hm(ahTWx) ≤ max g∈Hm bgTWx + ℓ(ΘTf(g), y) −max h∈Hm(bhTWx). (8) Proof. Please refer to the supplementary material for the proof. In the limit, as the scale of W approach +∞, the loss term ℓ(ΘTf(g), y) becomes negligible compared to the score term gTWx. Thus, the solutions to loss-augmented inference and inference problems become almost identical, except when an element of Wx is very close to 0. 
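The bound (7) can also be sanity-checked numerically on a tiny tree by brute force over all 2^m binary codes. In this sketch of ours, the heap-ordered navigation helper, the use of log loss, and all sizes are illustrative choices (brute force is feasible only because m = 3).

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(1)
m, p, k = 3, 4, 5                        # 3 internal nodes -> 4 leaves
W = rng.normal(size=(m, p))
Theta = rng.normal(size=(m + 1, k))
x, y = rng.normal(size=p), 2

def leaf_of(h):
    """Index of the leaf reached by decisions h (heap-ordered nodes)."""
    i = 1
    while i <= m:
        i = 2 * i + (1 if h[i - 1] > 0 else 0)
    return i - m - 1

def log_loss(theta, y):
    return float(-theta[y] + np.log(np.exp(theta).sum()))

H = [np.array(h) for h in product([-1, 1], repeat=m)]
s = W @ x
lhs = log_loss(Theta[leaf_of(np.sign(s))], y)           # true loss
rhs = (max(float(g @ s) + log_loss(Theta[leaf_of(g)], y) for g in H)
       - max(float(h @ s) for h in H))                  # bound (7)
```

As the proof shows, taking g = sgn(Wx) inside the first max already matches the left-hand side, so rhs ≥ lhs always holds.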
Thus, even though a larger ∥W∥yields a tighter bound, it makes the bound approach the loss itself, and therefore becomes nearly piecewise-constant, which is hard to optimize. Based on Proposition 1, one easy way to decrease the upper bound is to increase the norm of W, which does not affect the loss. Our experiments indicate that a lower value of the loss can be achieved when the norm of W is regularized. We therefore constrain the norm of W to obtain an objective with better generalization. Since each row of W acts independently in a decision tree in the split functions, it is reasonable to constrain the norm of each row independently. Summing over the bounds for different training pairs and constraining the norm of rows of W, we obtain the following optimization problem, called the surrogate objective: minimize L′(W, Θ; D) = X (x,y)∈D max g∈Hm gTWx + ℓ(ΘTf(g), y) −max h∈Hm(hTWx) s.t. ∥wi∥2 ≤ν for all i ∈{1, . . . , m} (9) where ν ∈R+ is a regularization parameter and wi is the ith row of W. For all values of ν, we have L(W, Θ; D) ≤L′(W, Θ; D). Instead of using the typical Lagrange form for regularization, we employ hard constraints to enable sparse gradient updates of the rows of W, since the gradients for most rows of W are zero at each step in training. 6 Optimizing the surrogate objective Even though minimizing the surrogate objective of (9) entails a non-convex optimization, L′(W, Θ; D) is much better behaved than empirical loss in (4). L′(W, Θ; D) is piecewise linear and convex-concave in W, and the constraints on W define a convex set. Loss-augmented inference. To evaluate and use the surrogate objective in (9) for optimization, we must solve a loss-augmented inference problem to find the binary code that maximizes the sum of the score and loss terms: bg(x) = argmax g∈Hm gTWx + ℓ(ΘTf(g), y) . 
(10)

An observation that makes this optimization tractable is that f(g) can take on only m+1 distinct values, which correspond to terminating at one of the m+1 leaves of the tree and selecting a leaf parameter from {θ_j}_{j=1}^{m+1}. Fortunately, for any leaf index j ∈ {1, …, m+1}, we can efficiently solve

argmax_{g∈H^m} gᵀWx + ℓ(θ_j, y)   s.t. f(g) = 1_j .   (11)

Note that if f(g) = 1_j, then Θᵀf(g) equals the j-th row of Θ, i.e., θ_j. To solve (11), we need to set all of the binary bits in g corresponding to the path from the root to the leaf j to be consistent with the path direction toward the leaf j. However, bits of g that do not appear on this path have no effect on the output of f(g), and all such bits should be set as g[i] = sgn(w_iᵀx) to maximize gᵀWx. Accordingly, we can essentially ignore the off-path bits by subtracting sgn(Wx)ᵀWx from (11) to obtain

argmax_{g∈H^m} gᵀWx + ℓ(θ_j, y) = argmax_{g∈H^m} (g − sgn(Wx))ᵀWx + ℓ(θ_j, y) .   (12)

Algorithm 1 Stochastic gradient descent (SGD) algorithm for non-greedy decision tree learning.
1: Initialize W^(0) and Θ^(0) using a greedy procedure
2: for t = 0 to τ do
3:   Sample a pair (x, y) uniformly at random from D
4:   ĥ ← sgn(W^(t) x)
5:   ĝ ← argmax_{g∈H^m} gᵀW^(t) x + ℓ(Θᵀf(g), y)
6:   W^(tmp) ← W^(t) − η ĝxᵀ + η ĥxᵀ
7:   for i = 1 to m do
8:     W^(t+1)_{i,·} ← min{1, √ν / ∥W^(tmp)_{i,·}∥₂} · W^(tmp)_{i,·}
9:   end for
10:  Θ^(t+1) ← Θ^(t) − η ∂/∂Θ ℓ(Θᵀf(ĝ), y) |_{Θ=Θ^(t)}
11: end for

Note that sgn(Wx)ᵀWx is constant in g, and this subtraction zeros out all bits of g that are not on the path to the leaf j. So, to solve (12), we only need to consider the bits on the path to the leaf j for which sgn(w_iᵀx) is not consistent with the path direction. Using a single depth-first search on the decision tree, we can solve (11) for every j and, among those, pick the j that maximizes (11). The algorithm described above runs in O(mp) ⊆ O(2^d p) time, where d is the tree depth and the factor p comes from computing the inner product w_iᵀx at each internal node i. This algorithm is not efficient for deep trees, especially as we need to perform loss-augmented inference once for every stochastic gradient computation. In what follows, we develop an alternative, more efficient formulation and algorithm with time complexity O(d²p).

Fast loss-augmented inference. To motivate the fast loss-augmented inference algorithm, we formulate a slightly different upper bound on the loss:

ℓ(Θᵀf(sgn(Wx)), y) ≤ max_{g∈B₁(sgn(Wx))} ( gᵀWx + ℓ(Θᵀf(g), y) ) − max_{h∈H^m} hᵀWx ,   (13)

where B₁(sgn(Wx)) denotes the Hamming ball of radius 1 around sgn(Wx), i.e., B₁(sgn(Wx)) ≡ {g ∈ H^m | ∥g − sgn(Wx)∥_H ≤ 1}; hence g ∈ B₁(sgn(Wx)) implies that g and sgn(Wx) differ in at most one bit. The proof of (13) is identical to the proof of (7). The key benefit of this new formulation is that loss-augmented inference with the new bound is computationally efficient: since ĝ and sgn(Wx) differ in at most one bit, f(ĝ) can take on only d+1 distinct values, so we need to evaluate (12) for at most d+1 values of j, requiring a running time of O(d²p).

Stochastic gradient descent (SGD). One reasonable approach to minimizing (9) uses stochastic gradient descent (SGD), the steps of which are outlined in Alg 1. Here, η denotes the learning rate, and τ is the number of optimization steps.
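To make the W-update concrete, the following is a minimal NumPy sketch of lines 6–9 of Algorithm 1 (the gradient step on the bound followed by the per-row norm projection), assuming the codes ĝ and ĥ have already been obtained from loss-augmented inference. The function name and toy dimensions are ours, not from the paper.

```python
import numpy as np

def sgd_step(W, x, g_hat, h_hat, eta, nu):
    """One W-update of Algorithm 1 (lines 6-9): a gradient step on the
    surrogate bound, then projection of each row of W onto the l2-ball of
    radius sqrt(nu) (the feasible set ||w_i||^2 <= nu)."""
    # Line 6: since d/dW (h^T W x) = h x^T, the subgradient of the bound
    # w.r.t. W is (g_hat - h_hat) x^T.  Rows where g_hat == h_hat get a
    # zero update, which is why the hard constraints permit sparse updates.
    W_tmp = W - eta * np.outer(g_hat - h_hat, x)
    # Lines 7-9: rescale any row whose norm exceeds sqrt(nu).
    norms = np.linalg.norm(W_tmp, axis=1, keepdims=True)
    scale = np.minimum(1.0, np.sqrt(nu) / np.maximum(norms, 1e-12))
    return scale * W_tmp

# Toy usage: m = 3 internal nodes, p = 4 features.
rng = np.random.default_rng(0)
W = rng.standard_normal((3, 4))
x = rng.standard_normal(4)
h_hat = np.sign(W @ x)
g_hat = h_hat.copy()
g_hat[0] *= -1          # pretend loss-augmented inference flipped one bit
W_new = sgd_step(W, x, g_hat, h_hat, eta=0.1, nu=1.0)
# Every row of W_new now satisfies the norm constraint.
```

Note that only row 0 receives a gradient contribution here; the other rows are at most rescaled, matching the sparse-update argument in the text.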
Line 6 corresponds to a gradient update in W, supported by the fact that ∂/∂W (hᵀWx) = hxᵀ. Line 8 performs the projection back onto the feasible region for W, and Line 10 updates Θ based on the gradient of the loss. Our implementation modifies Alg 1 by adopting common SGD tricks, including the use of momentum and mini-batches.

Stable SGD (SSGD). Even though Alg 1 achieves good training and test accuracy relatively quickly, we observe that after several gradient updates some of the leaves may end up not being assigned to any data points, so the full capacity of the tree is not exploited. We call such leaves inactive, as opposed to active leaves that are assigned to at least one training data point. An inactive leaf may become active again, but this rarely happens given the form of the gradient updates. To discourage abrupt changes in the number of inactive leaves, we introduce a variant of SGD in which the assignments of data points to leaves are fixed for a number of gradient updates. Thus, the bound is optimized with respect to a set of data-point-to-leaf assignment constraints. When the improvement in the bound becomes negligible, the leaf assignment variables are updated, followed by another round of optimization of the bound. We call this algorithm Stable SGD (SSGD) because it changes the assignment of data points to leaves more conservatively than SGD. Let a(x) denote the 1-of-(m+1) encoding of the leaf to which a data point x should be assigned.

[Figure 2: Test and training accuracy of a single tree as a function of tree depth (6 to 18) on SensIT, Connect4, Protein, and MNIST, for axis-aligned, CO2, non-greedy, random, and OC1 trees.]

Then, each iteration of SSGD
with fast loss-augmented inference relies on the following upper bound on the loss:

ℓ(Θᵀf(sgn(Wx)), y) ≤ max_{g∈B₁(sgn(Wx))} ( gᵀWx + ℓ(Θᵀf(g), y) ) − max_{h∈H^m : f(h)=a(x)} hᵀWx .   (14)

One can easily verify that the RHS of (14) is at least as large as the RHS of (13), hence the inequality.

Computational complexity. To analyze the computational complexity of each SGD step, we note that the Hamming distance between ĝ (defined in (10)) and ĥ = sgn(Wx) is bounded above by the tree depth d, because only those elements of ĝ corresponding to the path to the selected leaf can differ from sgn(Wx). Thus, for SGD, the expression (ĝ − ĥ)xᵀ needed in Line 6 of Alg 1 can be computed in O(dp) time if we know which bits of ĥ and ĝ differ. Accordingly, Lines 6 and 7 can be performed in O(dp). The computational bottleneck is the loss-augmented inference in Line 5. When fast loss-augmented inference is performed in O(d²p) time, the total time complexity of a gradient update for both SGD and SSGD becomes O(d²p + k), where k is the number of labels.

7 Experiments

Experiments are conducted on several benchmark multi-class classification datasets from LibSVM [6], namely SensIT, Connect4, Protein, and MNIST. We use the provided train/validation/test splits when available. If such splits are not provided, we use a random 80%/20% split of the training data for train/validation, and a random 64%/16%/20% split for train/validation/test. We compare our method for non-greedy learning of oblique trees with several greedy baselines, including conventional axis-aligned trees based on information gain, OC1 oblique trees [17] that use coordinate descent to optimize the splits, and random oblique trees that select the best split function, by information gain, from a set of randomly generated hyperplanes.
We also compare with CO2 [18], which is a special case of our upper-bound approach applied greedily to trees of depth 1, one node at a time. Any base algorithm for learning decision trees can be augmented by post-training pruning [16], or by building ensembles with bagging [4] or boosting [8]. However, the key differences between non-greedy trees and the greedy baselines become most apparent when analyzing individual trees. For a single tree, the major determinant of accuracy is the size of the tree, which we control by changing the maximum tree depth. Fig. 2 depicts test and training accuracy for non-greedy trees and four other baselines as a function of tree depth. We evaluate trees of depth 6 up to 18 at depth intervals of 2. The hyper-parameters for each method are tuned for each depth independently. While the absolute accuracy of our non-greedy trees varies between datasets, a few key observations hold in all cases. First, we observe that non-greedy trees achieve the best test performance across tree depths on multiple datasets. Second, trees trained using our non-greedy approach seem to be less susceptible to overfitting and achieve better generalization performance at various tree depths. As described below, we think that the norm regularization provides a principled way to tune the tightness of the tree's fit to the training data.

[Figure 3: The effect of ν on the structure of trees trained on MNIST (depths 10, 13 and 16): the number of active leaves as a function of the regularization parameter ν (log scale). A small value of ν prunes the tree to use far fewer leaves than the axis-aligned baseline used for initialization (dotted line).]
Finally, the comparison between non-greedy and CO2 [18] trees isolates the effect of non-greediness, since it compares our method with its simpler variant applied greedily one node at a time. We find that in most cases, the non-greedy optimization helps by improving upon the results of CO2.

[Figure 4: Total time to execute 1000 epochs of SGD on the Connect4 dataset using loss-augmented inference and its fast variant, for depths 6 to 18.]

A key hyper-parameter of our method is the regularization constant ν in (9), which controls the tightness of the upper bound. With a small ν, the norm constraints force the method to choose a W with a large margin at each internal node. The choice of ν is therefore closely related to the generalization of the learned trees. As shown in Fig. 3, ν also implicitly controls the degree of pruning of the leaves of the tree during training. We train multiple trees for different values of ν ∈ {0.1, 1, 4, 10, 43, 100}, and we pick the value of ν that produces the tree with the minimum validation error. We also tune the SGD learning rate, η, in this step. The selected ν and η are then used to build a tree on the union of the training and validation sets, which is evaluated on the test set. To build non-greedy trees, we initially build an axis-aligned tree with split functions that threshold a single feature, optimized using conventional procedures that maximize information gain. The axis-aligned splits are used to initialize a greedy variant of the tree training procedure, called CO2 [18], which provides initial values for W and Θ for the non-greedy procedure. Fig. 4 shows an empirical comparison of training time for SGD with loss-augmented inference and with fast loss-augmented inference. As expected, the run-time of loss-augmented inference grows exponentially with tree depth, whereas its fast variant is much more scalable.
We expect to see much larger speedup factors for larger datasets; Connect4 has only 55,000 training points.

8 Conclusion

We present a non-greedy method for learning decision trees, using stochastic gradient descent to optimize an upper bound on the empirical loss of the tree's predictions on the training set. Our model poses the global training of decision trees in a well-characterized optimization framework, which makes it simpler to pose extensions that could be considered in future work. Efficiency gains could be achieved by learning sparse split functions via sparsity-inducing regularization on W. Further, the core optimization problem permits applying the kernel trick to the linear split parameters W, making our overall model applicable to learning higher-order split functions or to training decision trees on examples in arbitrary Reproducing Kernel Hilbert Spaces.

Acknowledgment. MN was financially supported in part by a Google fellowship. DF was financially supported in part by NSERC Canada and the NCAP program of CIFAR.

References
[1] K. P. Bennett. Global tree optimization: A non-greedy decision tree algorithm. Computing Science and Statistics, pages 156–156, 1994.
[2] K. P. Bennett and J. A. Blue. A support vector machine approach to decision trees. Math Report No. 97-100, Department of Mathematical Sciences, Rensselaer Polytechnic Institute, pages 2396–2401, 1997.
[3] K. P. Bennett, N. Cristianini, J. Shawe-Taylor, and D. Wu. Enlarging the margins in perceptron decision trees. Machine Learning, 41(3):295–313, 2000.
[4] L. Breiman. Random forests. Machine Learning, 45(1):5–32, 2001.
[5] L. Breiman, J. Friedman, R. A. Olshen, and C. J. Stone. Classification and Regression Trees. Chapman & Hall/CRC, 1984.
[6] C. C. Chang and C. J. Lin. LIBSVM: A library for support vector machines, 2001.
[7] A. Criminisi and J. Shotton. Decision Forests for Computer Vision and Medical Image Analysis. Springer, 2013.
[8] J. H. Friedman.
Greedy function approximation: A gradient boosting machine. Annals of Statistics, pages 1189–1232, 2001.
[9] J. Gall, A. Yao, N. Razavi, L. Van Gool, and V. Lempitsky. Hough forests for object detection, tracking, and action recognition. IEEE Trans. PAMI, 33(11):2188–2202, 2011.
[10] T. Hastie, R. Tibshirani, and J. Friedman. The Elements of Statistical Learning (2nd ed.). Springer, 2009.
[11] L. Hyafil and R. L. Rivest. Constructing optimal binary decision trees is NP-complete. Information Processing Letters, 5(1):15–17, 1976.
[12] J. Jancsary, S. Nowozin, and C. Rother. Loss-specific training of non-parametric image restoration models: A new state of the art. ECCV, 2012.
[13] M. I. Jordan and R. A. Jacobs. Hierarchical mixtures of experts and the EM algorithm. Neural Computation, 6(2):181–214, 1994.
[14] E. Konukoglu, B. Glocker, D. Zikic, and A. Criminisi. Neighbourhood approximation forests. MICCAI, pages 75–82. Springer, 2012.
[15] B. Lakshminarayanan, D. M. Roy, and Y. W. Teh. Mondrian forests: Efficient online random forests. NIPS, pages 3140–3148, 2014.
[16] J. Mingers. An empirical comparison of pruning methods for decision tree induction. Machine Learning, 4(2):227–243, 1989.
[17] S. K. Murthy and S. L. Salzberg. On growing better decision trees from data. PhD thesis, Johns Hopkins University, 1995.
[18] M. Norouzi, M. D. Collins, D. J. Fleet, and P. Kohli. CO2 forest: Improved random forest by continuous optimization of oblique splits. arXiv:1506.06155, 2015.
[19] M. Norouzi and D. J. Fleet. Minimal loss hashing for compact binary codes. ICML, 2011.
[20] S. Nowozin. Improved information gain estimates for decision tree induction. ICML, 2012.
[21] J. R. Quinlan. Induction of decision trees. Machine Learning, 1(1):81–106, 1986.
[22] J. Shotton, R. Girshick, A. Fitzgibbon, T. Sharp, M. Cook, M. Finocchio, R. Moore, P. Kohli, A. Criminisi, A.
Kipman, et al. Efficient human pose estimation from single depth images. IEEE Trans. PAMI, 2013.
[23] B. Taskar, C. Guestrin, and D. Koller. Max-margin Markov networks. NIPS, 2003.
[24] I. Tsochantaridis, T. Hofmann, T. Joachims, and Y. Altun. Support vector machine learning for interdependent and structured output spaces. ICML, 2004.
[25] C. N. J. Yu and T. Joachims. Learning structural SVMs with latent variables. ICML, 2009.
Accelerated Proximal Gradient Methods for Nonconvex Programming

Huan Li, Zhouchen Lin
Key Lab. of Machine Perception (MOE), School of EECS, Peking University, P. R. China
Cooperative Medianet Innovation Center, Shanghai Jiaotong University, P. R. China
lihuanss@pku.edu.cn, zlin@pku.edu.cn

Abstract

Nonconvex and nonsmooth problems have recently received considerable attention in signal/image processing, statistics and machine learning. However, solving nonconvex and nonsmooth optimization problems remains a big challenge. Accelerated proximal gradient (APG) is an excellent method for convex programming. However, it is still unknown whether the usual APG can ensure convergence to a critical point in nonconvex programming. In this paper, we extend APG to general nonconvex and nonsmooth programs by introducing a monitor that satisfies the sufficient descent property. Accordingly, we propose a monotone APG and a nonmonotone APG. The latter waives the requirement of monotonic reduction of the objective function and needs less computation in each iteration. To the best of our knowledge, we are the first to provide APG-type algorithms for general nonconvex and nonsmooth problems that ensure every accumulation point is a critical point, while the convergence rate remains O(1/k²) when the problems are convex, where k is the number of iterations. Numerical results testify to the advantage of our algorithms in speed.

1 Introduction

In recent years, sparse and low-rank learning has been a hot research topic and leads to a wide variety of applications in signal/image processing, statistics and machine learning. The l1-norm and the nuclear norm, as the continuous and convex surrogates of the l0-norm and the rank, respectively, have been used extensively in the literature; see, e.g., the recent collections [1].
Although the l1-norm and the nuclear norm have achieved great success, in many cases they are suboptimal, as they can promote sparsity and low-rankness only under very limited conditions [2, 3]. To address this issue, many nonconvex regularizers have been proposed, such as the lp-norm [4], the Capped-l1 penalty [3], the Log-Sum Penalty [2], the Minimax Concave Penalty [5], the Geman Penalty [6], the Smoothly Clipped Absolute Deviation [7] and the Schatten-p norm [8]. This trend motivates a revived interest in the analysis and design of algorithms for solving nonconvex and nonsmooth problems, which can be formulated as

min_{x∈R^n} F(x) = f(x) + g(x),   (1)

where f is differentiable (it can be nonconvex) and g can be both nonconvex and nonsmooth. Accelerated gradient methods have been at the heart of convex optimization research. In a series of celebrated works [9, 10, 11, 12, 13, 14], several accelerated gradient methods were proposed for problem (1) with convex f and g. In these methods, k iterations are sufficient to find a solution within O(1/k²) error from the optimal objective value. Recently, Ghadimi and Lan [15] presented a unified treatment of the accelerated gradient method (UAG) for convex, nonconvex and stochastic optimization.

Table 1: Comparison of GD (general descent method), iPiano, GIST, GDPA, IR, IFB, APG, UAG and our method for problem (1): the assumptions, whether the method accelerates for convex programs (CP), and whether it converges for nonconvex programs (NCP).

Method          Assumption                                Accelerates (CP)  Converges (NCP)
GD [16, 17]     f + g: KL                                 No                Yes
iPiano [18]     nonconvex f, convex g                     No                Yes
GIST [19]       nonconvex f, g = g1 − g2, g1, g2 convex   No                Yes
GDPA [20]       nonconvex f, g = g1 − g2, g1, g2 convex   No                Yes
IR [8, 21]      special f and g                           No                Yes
IFB [22]        nonconvex f, nonconvex g                  No                Yes
APG [12, 13]    convex f, convex g                        Yes               Unclear
UAG [15]        nonconvex f, convex g                     Yes               Yes
Ours            nonconvex f, nonconvex g                  Yes               Yes
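To illustrate how these nonconvex surrogates differ from the l1-norm, the small NumPy sketch below evaluates the l1 penalty, the capped-l1 penalty, and the Log-Sum penalty on a few coefficients. The parameter values are illustrative assumptions, not taken from any of the cited papers.

```python
import numpy as np

w = np.array([0.05, 0.5, 5.0])
lam, theta, eps = 1.0, 1.0, 0.1

l1 = lam * np.abs(w)                       # convex l1 surrogate
capped = lam * np.minimum(np.abs(w), theta)  # Capped-l1 penalty [3]
log_sum = lam * np.log(1.0 + np.abs(w) / eps)  # Log-Sum Penalty [2]

# The nonconvex penalties saturate (capped-l1) or grow only logarithmically
# (Log-Sum) for large |w_i|, so they bias large coefficients far less than
# the l1-norm does -- behaving closer to the l0-norm.
```

Small coefficients are penalized similarly by all three, while the coefficient 5.0 pays the full l1 price of 5 but only a capped-l1 price of 1.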
They proved that their algorithm converges¹ in nonconvex programming with nonconvex f but convex g, and accelerates with an O(1/k²) convergence rate in convex programming for problem (1). A convergence rate for the gradient mapping is also analyzed in [15]. Attouch et al. [16] proposed a unified framework to prove the convergence of a general class of descent methods using the Kurdyka-Łojasiewicz (KL) inequality for problem (1), and Frankel et al. [17] studied the convergence rates of general descent methods under the assumption that the desingularising function ϕ in the KL property has the form (C/θ)t^θ. A typical example in their framework is the proximal gradient method. However, no literature shows that there exists an accelerated gradient method satisfying the conditions of their framework. Other typical methods for problem (1) include Inertial Forward-Backward (IFB) [22], iPiano [18], General Iterative Shrinkage and Thresholding (GIST) [19], Gradient Descent with Proximal Average (GDPA) [20] and Iteratively Reweighted algorithms (IR) [8, 21]. Table 1 demonstrates that the existing methods are not ideal. GD and IFB cannot accelerate the convergence for convex programs. GIST and GDPA require that g be explicitly written as a difference of two convex functions. iPiano demands the convexity of g, and IR is suitable only for some special cases of problem (1). APG can accelerate the convergence for convex programs; however, it is unclear whether APG converges to critical points for nonconvex programs. UAG ensures convergence for nonconvex programming; however, it requires g to be convex, which restricts the application of UAG to nonconvexly regularized problems, such as sparse and low-rank learning. To the best of our knowledge, extending the accelerated gradient method to general nonconvex and nonsmooth programs while keeping the O(1/k²) convergence rate in the convex case remains an open problem.
In this paper we aim to extend Beck and Teboulle's APG [12, 13] to solve the general nonconvex and nonsmooth problem (1). APG first extrapolates a point y^k by combining the current point and the previous point, then solves a proximal mapping problem. When extending APG to nonconvex programs, the chief difficulty lies in the extrapolated point y^k: when convexity is absent, we have little restriction on F(y^k). In fact, F(y^k) can be arbitrarily larger than F(x^k) when y^k is a bad extrapolation, especially when F is oscillatory. When x^{k+1} is computed by a proximal mapping at a bad y^k, F(x^{k+1}) may also be arbitrarily larger than F(x^k). Beck and Teboulle's monotone APG [12] ensures F(x^{k+1}) ≤ F(x^k); however, this is not enough to ensure convergence to critical points. To address this issue, we introduce a monitor satisfying the sufficient descent property, which prevents a bad extrapolation y^k and then corrects it. In summary, our contributions include:

1. We propose APG-type algorithms for general nonconvex and nonsmooth programs (1). We first extend Beck and Teboulle's monotone APG [12] by replacing their descent condition with a sufficient descent condition. This critical change ensures that every accumulation point is a critical point. Our monotone APG satisfies some modified conditions of the framework of [16, 17], and thus stronger results on the convergence rate can be obtained under the KL assumption. Then we propose a nonmonotone APG, which allows larger step sizes when line search is used and reduces the average number of proximal mappings in each iteration. Thus it can further speed up convergence in practice.

¹ Except for the work under the KL assumption, convergence for nonconvex problems in this paper and in the references of this paper means that every accumulation point is a critical point.

2. For our APGs, the convergence rates remain O(1/k²) when the problems are convex.
This result is of great significance when the objective function is locally convex in the neighborhoods of local minimizers, even if it is globally nonconvex.

2 Preliminaries

2.1 Basic Assumptions

A function g : R^n → (−∞, +∞] is said to be proper if dom g ≠ ∅, where dom g = {x ∈ R^n : g(x) < +∞}. g is lower semicontinuous at a point x⁰ if lim inf_{x→x⁰} g(x) ≥ g(x⁰). In problem (1), we assume that f is a proper function with Lipschitz continuous gradient and that g is proper and lower semicontinuous. We also assume that F(x) is coercive, i.e., F is bounded from below and F(x) → ∞ when ∥x∥ → ∞, where ∥·∥ is the l2-norm.

2.2 KL Inequality

Definition 1. [23] A function f : R^n → (−∞, +∞] is said to have the KL property at ū ∈ dom ∂f := {x ∈ R^n : ∂f(x) ≠ ∅} if there exist η ∈ (0, +∞], a neighborhood U of ū and a function ϕ ∈ Φ_η, such that for all u ∈ U ∩ {u ∈ R^n : f(ū) < f(u) < f(ū) + η}, the following inequality holds:

ϕ′( f(u) − f(ū) ) · dist(0, ∂f(u)) ≥ 1,   (2)

where Φ_η stands for the class of functions ϕ : [0, η) → R₊ satisfying: (1) ϕ is concave and C¹ on (0, η); (2) ϕ is continuous at 0, with ϕ(0) = 0; and (3) ϕ′(x) > 0 for all x ∈ (0, η).

All semi-algebraic and subanalytic functions satisfy the KL property. In particular, the desingularising function ϕ(t) of a semi-algebraic function can be chosen of the form (C/θ)t^θ with θ ∈ (0, 1]. Typical semi-algebraic functions include real polynomial functions, ∥x∥_p with p ≥ 0, rank(X), and the indicator functions of the PSD cone, of Stiefel manifolds and of constant-rank matrices [23].

2.3 Review of APG in the Convex Case

We first review APG in the convex case. Beck and Teboulle [13] extended Nesterov's accelerated gradient method to the nonsmooth case. The resulting Accelerated Proximal Gradient (APG) method consists of the following steps:

y^k = x^k + ((t_{k−1} − 1)/t_k)(x^k − x^{k−1}),   (3)
x^{k+1} = prox_{α_k g}(y^k − α_k ∇f(y^k)),   (4)
t_{k+1} = (√(4 t_k² + 1) + 1)/2,   (5)

where the proximal mapping is defined as prox_{αg}(x) = argmin_u g(u) + (1/(2α))∥x − u∥².
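For reference, steps (3)–(5) can be implemented in a few lines. The sketch below (a toy under our own assumptions, not the authors' code) applies APG to the convex instance f(x) = ½∥Ax − b∥², g(x) = λ∥x∥₁, whose proximal mapping prox_{αg} is the soft-thresholding operator.

```python
import numpy as np

def soft_threshold(u, t):
    """Proximal mapping of t*||.||_1."""
    return np.sign(u) * np.maximum(np.abs(u) - t, 0.0)

def apg_lasso(A, b, lam, alpha, iters=300):
    """APG steps (3)-(5) for f(x)=0.5||Ax-b||^2, g(x)=lam*||x||_1,
    with fixed step size alpha <= 1/L, L = ||A||_2^2."""
    x_prev = x = np.zeros(A.shape[1])
    t_prev, t = 0.0, 1.0
    for _ in range(iters):
        y = x + (t_prev - 1.0) / t * (x - x_prev)            # (3) extrapolation
        grad = A.T @ (A @ y - b)
        x_prev, x = x, soft_threshold(y - alpha * grad, alpha * lam)  # (4)
        t_prev, t = t, (np.sqrt(4.0 * t**2 + 1.0) + 1.0) / 2.0        # (5)
    return x

# Toy usage: a sparse signal with one active coefficient.
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 10))
b = A @ np.array([1.0] + [0.0] * 9) + 0.01 * rng.standard_normal(40)
x = apg_lasso(A, b, lam=0.5, alpha=1.0 / np.linalg.norm(A, 2) ** 2)
```

In this convex setting the iterates enjoy the O(1/k²) rate discussed above; the nonconvex variants of the paper change how y^k is monitored, not this basic structure.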
APG is not a monotone algorithm, which means that F(x^{k+1}) may not be smaller than F(x^k). So Beck and Teboulle [12] further proposed a monotone APG, which consists of the following steps:

y^k = x^k + (t_{k−1}/t_k)(z^k − x^k) + ((t_{k−1} − 1)/t_k)(x^k − x^{k−1}),   (6)
z^{k+1} = prox_{α_k g}(y^k − α_k ∇f(y^k)),   (7)
t_{k+1} = (√(4 t_k² + 1) + 1)/2,   (8)
x^{k+1} = z^{k+1} if F(z^{k+1}) ≤ F(x^k), and x^{k+1} = x^k otherwise.   (9)

3 APGs for Nonconvex Programs

In this section, we propose two APG-type algorithms for general nonconvex nonsmooth problems. We establish convergence in the nonconvex case and the O(1/k²) convergence rate in the convex case. When the KL property is satisfied, we also provide stronger results on the convergence rate.

3.1 Monotone APG

Two reasons make the convergence analysis of the usual APG [12, 13] difficult for nonconvex programs: (1) y^k may be a bad extrapolation, and (2) in [12] only the descent property F(x^{k+1}) ≤ F(x^k) is ensured. To address these issues, we need to monitor and correct y^k when it has the potential to fail, and the monitor should enjoy the sufficient descent property, which is critical for ensuring convergence to a critical point. As is known, proximal gradient methods ensure sufficient descent [16] (cf. (15)), so we use a proximal gradient step as the monitor. More specifically, our algorithm consists of the following steps:

y^k = x^k + (t_{k−1}/t_k)(z^k − x^k) + ((t_{k−1} − 1)/t_k)(x^k − x^{k−1}),   (10)
z^{k+1} = prox_{α_y g}(y^k − α_y ∇f(y^k)),   (11)
v^{k+1} = prox_{α_x g}(x^k − α_x ∇f(x^k)),   (12)
t_{k+1} = (√(4 t_k² + 1) + 1)/2,   (13)
x^{k+1} = z^{k+1} if F(z^{k+1}) ≤ F(v^{k+1}), and x^{k+1} = v^{k+1} otherwise,   (14)

where α_y and α_x can be fixed constants satisfying α_y < 1/L and α_x < 1/L, or can be computed dynamically by backtracking line search initialized with the Barzilai-Borwein rule². Here L is the Lipschitz constant of ∇f. Our algorithm is an extension of Beck and Teboulle's monotone APG [12]; the difference lies in the extra v, which plays the role of monitor, and in the correction step of the x-update.
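The updates (10)–(14) can be sketched compactly as below, using fixed step sizes α_y = α_x = α (a simplification; the paper also allows backtracking line search with Barzilai-Borwein initialization). For an easily checkable demonstration we plug in a convex instance; in the nonconvex setting, prox_g would instead be the proximal map of, e.g., the capped-l1 penalty.

```python
import numpy as np

def monotone_apg(grad_f, F, prox_g, x0, alpha, iters=200):
    """Monotone APG, steps (10)-(14): z is the extrapolated proximal step,
    v is the monitor (a plain proximal gradient step from x), and the
    x-update keeps whichever of z, v has the smaller objective."""
    x_prev = x = z = np.asarray(x0, dtype=float)
    t_prev, t = 0.0, 1.0
    for _ in range(iters):
        y = x + t_prev / t * (z - x) + (t_prev - 1.0) / t * (x - x_prev)  # (10)
        z = prox_g(y - alpha * grad_f(y), alpha)                          # (11)
        v = prox_g(x - alpha * grad_f(x), alpha)                          # (12)
        t_prev, t = t, (np.sqrt(4.0 * t**2 + 1.0) + 1.0) / 2.0            # (13)
        x_prev, x = x, (z if F(z) <= F(v) else v)                         # (14)
    return x

# Convex toy instance: f(x) = 0.5*||x - c||^2, g(x) = lam*||x||_1; the
# minimizer is the soft-thresholding of c at level lam.
c, lam = np.array([2.0, -0.3, 0.7]), 0.5
grad_f = lambda x: x - c
F = lambda x: 0.5 * np.sum((x - c) ** 2) + lam * np.sum(np.abs(x))
prox_g = lambda u, a: np.sign(u) * np.maximum(np.abs(u) - a * lam, 0.0)
x_star = monotone_apg(grad_f, F, prox_g, np.zeros(3), alpha=0.9)
```

The monitor v is exactly a proximal gradient step from x^k, so even when the extrapolation z is poor, the x-update falls back on a step with guaranteed sufficient descent.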
In (9), F(z^{k+1}) is compared with F(x^k), while in (14) F(z^{k+1}) is compared with F(v^{k+1}). A further difference is that Beck and Teboulle's algorithm only ensures descent, while our algorithm ensures sufficient descent, which means

F(x^{k+1}) ≤ F(x^k) − δ∥v^{k+1} − x^k∥²,   (15)

where δ > 0 is a small constant. It is not difficult to understand that the descent property alone cannot ensure convergence to a critical point in nonconvex programming. We present our convergence result in the following theorem³.

Theorem 1. Let f be a proper function with Lipschitz continuous gradient and g be proper and lower semicontinuous. For nonconvex f and nonconvex nonsmooth g, assume that F(x) is coercive. Then {x^k} and {v^k} generated by (10)-(14) are bounded, and for any accumulation point x* of {x^k} we have 0 ∈ ∂F(x*), i.e., x* is a critical point.

A remarkable aspect of our algorithm is that, although we have made some modifications to Beck and Teboulle's algorithm, the O(1/k²) convergence rate in the convex case still holds. Similar to Theorem 5.1 in [12], we have the following theorem on the accelerated convergence in the convex case.

Theorem 2. For convex f and g, assume that ∇f is Lipschitz continuous and let x* be any global optimum. Then {x^k} generated by (10)-(14) satisfies

F(x^{N+1}) − F(x*) ≤ 2∥x⁰ − x*∥² / (α_y (N + 1)²).   (16)

When the objective function is locally convex in the neighborhood of local minimizers, Theorem 2 means that APG ensures an O(1/k²) convergence rate when approaching a local minimizer, thus accelerating convergence. For better reference, we summarize the proposed monotone APG in Algorithm 1.

² For the details of line search with Barzilai-Borwein initialization, please see the Supplementary Materials.
³ The proofs in this paper can be found in the Supplementary Materials.

Algorithm 1 Monotone APG
Initialize z¹ = x¹ = x⁰, t₁ = 1, t₀ = 0, α_y < 1/L, α_x < 1/L.
for k = 1, 2, 3, … do
  update y^k, z^{k+1}, v^{k+1}, t_{k+1} and x^{k+1} by (10)-(14).
end for

3.2 Convergence Rate under the KL Assumption

The KL property is a powerful tool, studied in [16], [17] and [23] for a class of general descent methods. The usual APG in [12, 13] does not satisfy the sufficient descent property, which is crucial for exploiting the KL property, and thus admits no conclusions under the KL assumption. On the other hand, due to the intermediate variables y^k, v^k and z^k, our algorithm is more complex than the general descent methods and also does not satisfy the conditions therein. However, thanks to the monitor-corrector steps (12) and (14), some modified conditions⁴ can be satisfied, and we can still obtain some exciting results under the KL assumption. Within the framework of [17], we have the following theorem.

Theorem 3. Let f be a proper function with Lipschitz continuous gradient and g be proper and lower semicontinuous. For nonconvex f and nonconvex nonsmooth g, assume that F(x) is coercive. If we further assume that f and g satisfy the KL property and that the desingularising function has the form ϕ(t) = (C/θ)t^θ for some C > 0, θ ∈ (0, 1], then:

1. If θ = 1, then there exists k₁ such that F(x^k) = F* for all k > k₁, and the algorithm terminates in finitely many steps.

2. If θ ∈ [1/2, 1), then there exists k₂ such that for all k > k₂,

F(x^k) − F* ≤ ( d₁C² / (1 + d₁C²) )^{k−k₂} · r_{k₂}.   (17)

3. If θ ∈ (0, 1/2), then there exists k₃ such that for all k > k₃,

F(x^k) − F* ≤ ( C / ((k − k₃) d₂ (1 − 2θ)) )^{1/(1−2θ)},   (18)

where F* is the common function value at all accumulation points of {x^k}, r_k = F(v^k) − F*, d₁ = (1/α_x + L/2) / (1/(2α_x) − L/2), and d₂ = min{ 1/(2d₁C), (C/(1 − 2θ)) (2^{(2θ−1)/(2θ−2)} − 1) r₀^{2θ−1} }.
This is the same as the results mentioned in [17], although our algorithm does not satisfy the conditions therein. 3.3 Nonmonotone APG Algorithm 1 is a monotone algorithm. When the problem is ill-conditioned, a monotone algorithm has to creep along the bottom of a narrow curved valley so that the objective function value does not increase, resulting in short stepsizes or even zigzagging and hence slow convergence [24]. Removing the requirement on monotonicity can improve convergence speed because larger stepsizes can be adopted when line search is used. On the other hand, in Algorithm 1 we need to compute zk+1 and vk+1 in each iteration and use vk+1 to monitor and correct zk+1. This is a conservative strategy. In fact, we can accept zk+1 as xk+1 directly if it satisfies some criterion showing that yk is a good extrapolation. Then vk+1 is computed only when this criterion is not met. Thus, we can reduce the average number of proximal 4For the details of difference please see Supplementary Materials. 5 mappings, accordingly the computation cost, in each iteration. So in this subsection we propose a nonmonotone APG to speed up convergence. In monotone APG, (15) is ensured. In nonmonotone APG, we allow xk+1 to make a larger objective function value than F(xk). Specifically, we allow xk+1 to yield an objective function value smaller than ck, a relaxation of F(xk). ck should not be too far from F(xk). So the average of F(xk), F(xk−1), · · · , F(x1) is a good choice. Thus we follow [24] to define ck as a convex combination of F(xk), F(xk−1), · · · , F(x1) with exponentially decreasing weights: ck = Pk j=1 ηk−jF(xj) Pk j=1 ηk−j , (19) where η ∈[0, 1) controls the degree of nonmonotonicity. In practice ck can be efficiently computed by the following recursion: qk+1 = ηqk + 1, (20) ck+1 = ηqkck + F(xk+1) qk+1 , (21) where q1 = 1 and c1 = F(x1). According to (14), we can split (15) into two parts by the different choices of xk+1. 
Accordingly, in nonmonotone APG we consider the following two conditions to replace (15):

F(z^{k+1}) ≤ c_k − δ∥z^{k+1} − y^k∥²,   (22)
F(v^{k+1}) ≤ c_k − δ∥v^{k+1} − x^k∥².   (23)

We choose (22) as the criterion mentioned before. When (22) holds, we deem y^k a good extrapolation and accept z^{k+1} directly; in this case we do not compute v^{k+1}. However, (22) does not hold all the time. When it fails, we deem that y^k may not be a good extrapolation. In this case, we compute v^{k+1} by (12), which satisfies (23), and then monitor and correct z^{k+1} by (14). (23) is ensured when α_x ≤ 1/L; when backtracking line search is used, a v^{k+1} satisfying (23) can be found in finitely many steps⁵. Combining (20), (21), (22) and x^{k+1} = z^{k+1}, we have

c_{k+1} ≤ c_k − δ∥x^{k+1} − y^k∥² / q_{k+1}.   (24)

Similarly, replacing (22) and x^{k+1} = z^{k+1} by (23) and x^{k+1} = v^{k+1}, respectively, we have

c_{k+1} ≤ c_k − δ∥x^{k+1} − x^k∥² / q_{k+1}.   (25)

This means that we replace the sufficient descent of F(x^k) in (15) by the sufficient descent of c_k. We summarize the nonmonotone APG in Algorithm 2⁶. Similar to monotone APG, nonmonotone APG also enjoys convergence in the nonconvex case and the O(1/k²) convergence rate in the convex case. We present our convergence result in Theorem 4; Theorem 2 still holds for Algorithm 2 with no modification, so we omit it here. Define Ω₁ = {k₁, k₂, …, k_j, …} and Ω₂ = {m₁, m₂, …, m_j, …} such that in Algorithm 2, (22) holds and x^{k+1} = z^{k+1} is executed for all k = k_j ∈ Ω₁, while for all k = m_j ∈ Ω₂, (22) does not hold and (14) is executed. Then Ω₁ ∩ Ω₂ = ∅, Ω₁ ∪ Ω₂ = {1, 2, 3, …}, and the following theorem holds.

Theorem 4. Let f be a proper function with Lipschitz continuous gradient and g be proper and lower semicontinuous. For nonconvex f and nonconvex nonsmooth g, assume that F(x) is coercive. Then {x^k}, {v^k} and {y^{k_j}} with k_j ∈ Ω₁ generated by Algorithm 2 are bounded, and

1. if Ω₁ or Ω₂ is finite, then for any accumulation point x* of {x^k}, we have 0 ∈ ∂F(x*);
⁵See Lemma 2 in Supplementary Materials.
⁶Please see Supplementary Materials for nonmonotone APG with line search.

Algorithm 2 Nonmonotone APG
Initialize z_1 = x_1 = x_0, t_1 = 1, t_0 = 0, η ∈ [0, 1), δ > 0, c_1 = F(x_1), q_1 = 1, α_x < 1/L, α_y < 1/L.
for k = 1, 2, 3, ... do
  y_k = x_k + (t_{k-1}/t_k)(z_k - x_k) + ((t_{k-1} - 1)/t_k)(x_k - x_{k-1}),
  z_{k+1} = prox_{α_y g}(y_k - α_y ∇f(y_k)).
  if F(z_{k+1}) ≤ c_k - δ‖z_{k+1} - y_k‖² then
    x_{k+1} = z_{k+1}.
  else
    v_{k+1} = prox_{α_x g}(x_k - α_x ∇f(x_k)),
    x_{k+1} = z_{k+1} if F(z_{k+1}) ≤ F(v_{k+1}), and x_{k+1} = v_{k+1} otherwise.
  end if
  t_{k+1} = (\sqrt{4 t_k^2 + 1} + 1)/2, q_{k+1} = η q_k + 1, c_{k+1} = (η q_k c_k + F(x_{k+1}))/q_{k+1}.
end for

2. if Ω_1 and Ω_2 are both infinite, then for any accumulation point x* of {x_{k_j+1}} and y* of {y_{k_j}}, where k_j ∈ Ω_1, and any accumulation point v* of {v_{m_j+1}} and x* of {x_{m_j}}, where m_j ∈ Ω_2, we have 0 ∈ ∂F(x*), 0 ∈ ∂F(y*) and 0 ∈ ∂F(v*).

4 Numerical Results

In this section, we test the performance of our algorithms on the problem of Sparse Logistic Regression (LR)⁷. Sparse LR is an attractive extension of LR as it can reduce overfitting and perform feature selection simultaneously. It is widely used in areas such as bioinformatics [25] and text categorization [26]. We follow Gong et al. [19] and consider Sparse LR with a nonconvex regularizer:

\min_w \frac{1}{n} \sum_{i=1}^{n} \log(1 + \exp(-y_i x_i^T w)) + r(w).  (26)

We choose r(w) as the capped ℓ1 penalty [3], defined as

r(w) = \lambda \sum_{i=1}^{d} \min(|w_i|, \theta), \quad \theta > 0.  (27)

We compare monotone APG (mAPG) and nonmonotone APG (nmAPG) with monotone GIST⁸ (mGIST), nonmonotone GIST (nmGIST) [19] and IFB [22]. We test the performance on the real-sim data set⁹, which contains 72,309 samples of 20,958 dimensions. We follow [19] to set λ = 0.0001, θ = 0.1λ, and the starting point to the zero vector. In nmAPG we set η = 0.8. In IFB the inertial parameter β is set to 0.01 and the Lipschitz constant is computed by backtracking. To make a fair comparison, we first run mGIST.
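The core loop of Algorithm 2 can be sketched as follows. This is a minimal illustration, not the paper's implementation: it uses fixed stepsizes (no line search) and a convex ℓ1 regularizer as a stand-in for g (the experiments above use the nonconvex capped ℓ1); all names are illustrative.

```python
import numpy as np

def soft_threshold(u, t):
    """Proximal operator of t * ||.||_1 (stand-in for a general prox_g)."""
    return np.sign(u) * np.maximum(np.abs(u) - t, 0.0)

def nonmonotone_apg(grad_f, F, prox_g, x0, alpha, eta=0.8, delta=1e-4, iters=200):
    x_prev = x = z = x0.copy()
    t_prev, t = 0.0, 1.0          # t_0 = 0, t_1 = 1
    q, c = 1.0, F(x0)             # q_1 = 1, c_1 = F(x_1)
    for _ in range(iters):
        y = x + (t_prev / t) * (z - x) + ((t_prev - 1.0) / t) * (x - x_prev)
        z = prox_g(y - alpha * grad_f(y), alpha)
        if F(z) <= c - delta * np.sum((z - y) ** 2):   # criterion (22)
            x_next = z                                  # accept the extrapolated step
        else:
            v = prox_g(x - alpha * grad_f(x), alpha)   # correction step (12)
            x_next = z if F(z) <= F(v) else v          # monitor and correct, (14)
        t_prev, t = t, (np.sqrt(4.0 * t * t + 1.0) + 1.0) / 2.0
        q_next = eta * q + 1.0
        c = (eta * q * c + F(x_next)) / q_next          # recursion (20)-(21)
        q = q_next
        x_prev, x = x, x_next
    return x

# Toy instance: f(x) = 0.5||Ax - b||^2, g(x) = lam * ||x||_1.
rng = np.random.default_rng(0)
A = rng.standard_normal((30, 10)); b = rng.standard_normal(30); lam = 0.5
L = np.linalg.norm(A.T @ A, 2)                 # Lipschitz constant of grad f
F = lambda x: 0.5 * np.sum((A @ x - b) ** 2) + lam * np.sum(np.abs(x))
grad_f = lambda x: A.T @ (A @ x - b)
prox_g = lambda u, a: soft_threshold(u, a * lam)
x_hat = nonmonotone_apg(grad_f, F, prox_g, np.zeros(10), alpha=0.9 / L)
```

With α < 1/L the corrected step always satisfies (23), so the relaxation c_k is nonincreasing even though F(x_k) itself may oscillate.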
The algorithm is terminated when the relative change of two consecutive objective function values is less than 10⁻⁵ or the number of iterations exceeds 1000. This termination condition is the same as in [19]. Then we run nmGIST, mAPG, nmAPG and IFB. These four algorithms are terminated when they reach an objective function value equal to or smaller than that of mGIST, or when the number of iterations exceeds 1000. We randomly choose 90% of the data as training data and the rest as test data. The results are averaged over 10 runs. All algorithms are run in Matlab 2011a on Windows 7 with an Intel Core i3 2.53 GHz CPU and 4GB memory. The results are reported in Table 2. We also plot the curves of objective function value vs. iteration number and CPU time in Figure 1.

⁷For the sake of space limitation, we leave another experiment, Sparse PCA, to Supplementary Materials.
⁸http://www.public.asu.edu/~yje02/Software/GIST
⁹http://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets

Table 2: Comparisons of APG, GIST and IFB on the sparse logistic regression problem. The quantities include the number of iterations, the average number of line searches per iteration, computing time (in seconds) and test error, all averaged over 10 runs.

Method  | #Iter. | #Line search | Time (s) | Test error
mGIST   | 994    | 2.19         | 300.42   | 2.94%
nmGIST  | 806    | 1.69         | 222.22   | 2.94%
IFB     | 635    | 2.59         | 215.82   | 2.96%
mAPG    | 175    | 2.99         | 133.23   | 2.93%
nmAPG   | 146    | 1.01         | 42.99    | 2.97%

We have the following observations: (1) APG-type methods need far fewer iterations and less computing time than GIST and IFB to reach the same (or smaller) objective function values. As GIST is in fact a Proximal Gradient (PG) method and IFB is an extension of PG, this verifies that APG can indeed accelerate convergence in practice. (2) nmAPG is faster than mAPG. We give two reasons: nmAPG avoids the computation of v_k most of the time, and it reduces the number of line searches in each iteration.
We mention that in mAPG line search is performed in both (11) and (12), while in nmAPG only the computation of z_{k+1} needs line search in every iteration; v_{k+1} is computed only when necessary. We note that the average number of line searches in nmAPG is nearly one. This means that (22) holds most of the time, so we can trust that z_k works well most of the time, and only occasionally is v_k computed to correct z_k and y_k. On the other hand, nonmonotonicity allows for larger stepsizes, which results in fewer line searches.

Figure 1: Comparison of the objective function values produced by APG, GIST and IFB. (a) Objective function value vs. iteration; (b) objective function value vs. CPU time.

5 Conclusions

In this paper, we propose two APG-type algorithms for efficiently solving general nonconvex nonsmooth problems, which are abundant in machine learning. We provide a detailed convergence analysis, showing that every accumulation point is a critical point for general nonconvex nonsmooth programs and that the convergence rate is maintained at O(1/k²) for convex programs. Nonmonotone APG allows for larger stepsizes and needs less computation in each iteration, and thus is faster than monotone APG in practice. Numerical experiments testify to the advantages of the two algorithms.

Acknowledgments

Zhouchen Lin is supported by the National Basic Research Program of China (973 Program) (grant no. 2015CB352502), the National Natural Science Foundation (NSF) of China (grant nos. 61272341 and 61231002), and the Microsoft Research Asia Collaborative Research Program. He is the corresponding author.

References

[1] F. Yun, editor. Low-rank and Sparse Modeling for Visual Analysis. Springer, 2014.
[2] E.J. Candes, M.B. Wakin, and S.P. Boyd.
Enhancing sparsity by reweighted ℓ1 minimization. Journal of Fourier Analysis and Applications, 14(5):877–905, 2008.
[3] T. Zhang. Analysis of multi-stage convex relaxation for sparse regularization. The Journal of Machine Learning Research, 11:1081–1107, 2010.
[4] S. Foucart and M.J. Lai. Sparsest solutions of underdetermined linear systems via ℓq minimization for 0 < q ≤ 1. Applied and Computational Harmonic Analysis, 26(3):395–407, 2009.
[5] C.H. Zhang. Nearly unbiased variable selection under minimax concave penalty. The Annals of Statistics, 38(2):894–942, 2010.
[6] D. Geman and C. Yang. Nonlinear image recovery with half-quadratic regularization. IEEE Transactions on Image Processing, 4(7):932–946, 1995.
[7] J. Fan and R. Li. Variable selection via nonconcave penalized likelihood and its oracle properties. Journal of the American Statistical Association, 96(456):1348–1360, 2001.
[8] K. Mohan and M. Fazel. Iterative reweighted algorithms for matrix rank minimization. The Journal of Machine Learning Research, 13(1):3441–3473, 2012.
[9] Y.E. Nesterov. A method for unconstrained convex minimization problem with the rate of convergence O(1/k²). Soviet Mathematics Doklady, 27(2):372–376, 1983.
[10] Y.E. Nesterov. Smooth minimization of nonsmooth functions. Mathematical Programming, 103(1):127–152, 2005.
[11] Y.E. Nesterov. Gradient methods for minimizing composite objective functions. Technical report, Center for Operations Research and Econometrics (CORE), Catholic University of Louvain, 2007.
[12] A. Beck and M. Teboulle. Fast gradient-based algorithms for constrained total variation image denoising and deblurring problems. IEEE Transactions on Image Processing, 18(11):2419–2434, 2009.
[13] A. Beck and M. Teboulle. A fast iterative shrinkage thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences, 2(1):183–202, 2009.
[14] P. Tseng.
On accelerated proximal gradient methods for convex-concave optimization. Technical report, University of Washington, Seattle, 2008.
[15] S. Ghadimi and G. Lan. Accelerated gradient methods for nonconvex nonlinear and stochastic programming. arXiv preprint arXiv:1310.3787, 2013.
[16] H. Attouch, J. Bolte, and B.F. Svaiter. Convergence of descent methods for semi-algebraic and tame problems: proximal algorithms, forward-backward splitting, and regularized Gauss-Seidel methods. Mathematical Programming, 137:91–129, 2013.
[17] P. Frankel, G. Garrigos, and J. Peypouquet. Splitting methods with variable metric for Kurdyka-Łojasiewicz functions and general convergence rates. Journal of Optimization Theory and Applications, 165:874–900, 2014.
[18] P. Ochs, Y. Chen, T. Brox, and T. Pock. iPiano: Inertial proximal algorithms for nonconvex optimization. SIAM Journal on Imaging Sciences, 7(2):1388–1419, 2014.
[19] P. Gong, C. Zhang, Z. Lu, J. Huang, and J. Ye. A general iterative shrinkage and thresholding algorithm for nonconvex regularized optimization problems. In ICML, pages 37–45, 2013.
[20] W. Zhong and J. Kwok. Gradient descent with proximal average for nonconvex and composite regularization. In AAAI, 2014.
[21] P. Ochs, A. Dosovitskiy, T. Brox, and T. Pock. On iteratively reweighted algorithms for non-smooth non-convex optimization in computer vision. SIAM Journal on Imaging Sciences, 2014.
[22] R.L. Bot, E.R. Csetnek, and S. László. An inertial forward-backward algorithm for the minimization of the sum of two nonconvex functions. Preprint, 2014.
[23] J. Bolte, S. Sabach, and M. Teboulle. Proximal alternating linearized minimization for nonconvex and nonsmooth problems. Mathematical Programming, 146(1-2):459–494, 2014.
[24] H. Zhang and W.W. Hager. A nonmonotone line search technique and its application to unconstrained optimization. SIAM Journal on Optimization, 14:1043–1056, 2004.
[25] S.K. Shevade and S.S. Keerthi.
A simple and efficient algorithm for gene selection using sparse logistic regression. Bioinformatics, 19(17):2246–2253, 2003.
[26] A. Genkin, D.D. Lewis, and D. Madigan. Large-scale Bayesian logistic regression for text categorization. Technometrics, 49(14):291–304, 2007.
Monotone k-Submodular Function Maximization with Size Constraints

Naoto Ohsaka
The University of Tokyo
ohsaka@is.s.u-tokyo.ac.jp

Yuichi Yoshida
National Institute of Informatics, and Preferred Infrastructure, Inc.
yyoshida@nii.ac.jp

Abstract

A k-submodular function is a generalization of a submodular function, where the input consists of k disjoint subsets, instead of a single subset, of the domain. Many machine learning problems, including influence maximization with k kinds of topics and sensor placement with k kinds of sensors, can be naturally modeled as the problem of maximizing monotone k-submodular functions. In this paper, we give constant-factor approximation algorithms for maximizing monotone k-submodular functions subject to several size constraints. The running times of our algorithms are almost linear in the domain size. We experimentally demonstrate that our algorithms outperform baseline algorithms in terms of the solution quality.

1 Introduction

The task of selecting a set of items subject to constraints on the size or the cost of the set arises in many machine learning problems. The objective can often be modeled as maximizing a function with the diminishing return property, where, for a finite set V, a function f : 2^V → R satisfies the diminishing return property if f(S ∪ {e}) − f(S) ≥ f(T ∪ {e}) − f(T) for any S ⊆ T and e ∈ V \ T. For example, sensor placement [13, 14], influence maximization in social networks [11], document summarization [15], and feature selection [12] involve objectives satisfying the diminishing return property. It is well known that the diminishing return property is equivalent to submodularity, where a function f : 2^V → R is submodular if f(S) + f(T) ≥ f(S ∩ T) + f(S ∪ T) holds for any S, T ⊆ V. When the objective function is submodular, and hence satisfies the diminishing return property, we can find in polynomial time a solution with a provable guarantee on its quality, even under various constraints [2, 3, 18, 21].
In many practical applications, however, we want to select several disjoint sets of items instead of a single set. To see this, let us describe two examples:

Influence maximization: Viral marketing is a cost-effective marketing strategy that promotes products by giving free (or discounted) items to a selected group of highly influential people, in the hope that, through word-of-mouth effects, a large number of product adoptions will occur [4, 19]. Suppose that we have k kinds of items, each having a different topic and thus a different word-of-mouth effect. Then, we want to distribute these items to B people selected from a group V of n people so as to maximize the (expected) number of product adoptions. It is natural to impose the constraint that each person can receive at most one item, since giving many free items to one particular person would be unfair.

Sensor placement: There are k kinds of sensors for different measures such as temperature, humidity, and illuminance. Suppose that we have B_i sensors of the i-th kind for each i ∈ {1, 2, ..., k}, and there is a set V of n locations, each of which can be instrumented with exactly one sensor. Then, we want to allocate those sensors so as to maximize the information gain.

When k = 1, these problems can be modeled as maximizing monotone submodular functions [11, 14] and admit polynomial-time (1 − 1/e)-approximation [18]. Unfortunately, however, the case of general k cannot be modeled as maximizing submodular functions, and we cannot apply the methods in the literature on maximizing submodular functions [2, 3, 18, 21]. We note that the problem of selecting k disjoint sets can sometimes be modeled as maximizing a monotone submodular function over the extended domain k × V subject to a partition matroid. Although (1 − 1/e)-approximation algorithms are known [3, 5], the running time is around O(k^8 n^8) and is prohibitively slow.
Our contributions: To address the problem of selecting k disjoint sets, we use the fact that the objectives can often be modeled as k-submodular functions. Let

(k + 1)^V := {(X_1, ..., X_k) | X_i ⊆ V for all i ∈ {1, 2, ..., k}, and X_i ∩ X_j = ∅ for all i ≠ j}

be the family of k disjoint sets. Then, a function f : (k + 1)^V → R is called k-submodular [9] if, for any x = (X_1, ..., X_k) and y = (Y_1, ..., Y_k) in (k + 1)^V, we have f(x) + f(y) ≥ f(x ⊔ y) + f(x ⊓ y), where

x ⊓ y := (X_1 ∩ Y_1, ..., X_k ∩ Y_k),
x ⊔ y := ((X_1 ∪ Y_1) \ ⋃_{i≠1}(X_i ∪ Y_i), ..., (X_k ∪ Y_k) \ ⋃_{i≠k}(X_i ∪ Y_i)).

Roughly speaking, k-submodularity captures the property that, if we choose exactly one set X_e ∈ {X_1, ..., X_k} that an element e can belong to for each e ∈ V, then the resulting function is submodular (see Section 2 for details). When k = 1, k-submodularity coincides with submodularity.

In this paper, we give approximation algorithms for maximizing non-negative monotone k-submodular functions under several constraints on the sizes of the k sets. Here, we say that f is monotone if f(x) ≤ f(y) for any x = (X_1, ..., X_k) and y = (Y_1, ..., Y_k) with X_i ⊆ Y_i for each i ∈ {1, ..., k}. Let n = |V| be the size of the domain. For the total size constraint, under which the total size of the k sets is bounded by B ∈ Z_+, we show that a simple greedy algorithm outputs a 1/2-approximation in O(knB) time. The approximation ratio of 1/2 is asymptotically tight, since a lower bound of (k+1)/(2k) + ε for any ε > 0 is known even when B = n [10]. Combining the random sampling technique [17], we also give a randomized algorithm that outputs a 1/2-approximation with probability at least 1 − δ in O(kn log B log(B/δ)) time. Hence, even when B is as large as n, the running time is almost linear in n. For the individual size constraint, under which the size of the i-th set is bounded by B_i ∈ Z_+ for each i ∈ {1, ..., k}, we give a 1/3-approximation algorithm with running time O(knB), where B = Σ_{i=1}^k B_i.
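The operations ⊓ and ⊔ can be made concrete in the label representation (each element carries a label in {0, 1, ..., k}, with 0 meaning "unassigned"). The sketch below, using hypothetical toy data, implements them and brute-force checks the k-submodular inequality for a small topic-wise coverage function, which is monotone k-submodular since its marginal gains are non-negative and shrink as more elements are covered:

```python
from itertools import product

# Toy ground set and, for each of k = 2 "topics", the targets each element covers.
V = [0, 1, 2]
COVER = {1: {0: {"a", "b"}, 1: {"b"}, 2: {"c"}},
         2: {0: {"a"}, 1: {"b", "c"}, 2: {"d"}}}

def f(x):
    """Topic-wise coverage; x maps each element to a label in {0, 1, 2}."""
    covered = set()
    for e, i in x.items():
        if i != 0:
            covered |= COVER[i][e]
    return len(covered)

def meet(x, y):
    """x ⊓ y: keep a label only where both vectors agree on it."""
    return {e: x[e] if x[e] == y[e] else 0 for e in x}

def join(x, y):
    """x ⊔ y: coordinate-wise union, dropping elements claimed by two labels."""
    out = {}
    for e in x:
        labels = {x[e], y[e]} - {0}
        out[e] = labels.pop() if len(labels) == 1 else 0
    return out

# Brute-force check of f(x) + f(y) >= f(x ⊔ y) + f(x ⊓ y) over all pairs.
states = [dict(zip(V, lab)) for lab in product([0, 1, 2], repeat=len(V))]
assert all(f(x) + f(y) >= f(join(x, y)) + f(meet(x, y))
           for x in states for y in states)
```

Note how `join` mirrors the definition: an element that appears with two different non-zero labels in x and y is excluded from every coordinate of x ⊔ y.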
We then give a randomized algorithm that outputs a 1/3-approximation with probability at least 1 − δ in O(k²n log(B/k) log(B/δ)) time. To show the practicality of our algorithms, we apply them to the influence maximization problem and the sensor placement problem, and we demonstrate that they outperform previous methods based on submodular function maximization and several baseline methods in terms of the solution quality.

Related work: When k = 2, k-submodularity is called bisubmodularity, and [20] applied bisubmodular functions to machine learning problems. However, their algorithms do not have any approximation guarantee. Huber and Kolmogorov introduced k-submodularity as a generalization of submodularity and bisubmodularity [9], and minimizing k-submodular functions was successfully used in a computer vision application [8]. Iwata et al. [10] gave a 1/2-approximation algorithm and a k/(2k−1)-approximation algorithm for maximizing non-monotone and monotone k-submodular functions, respectively, when there is no constraint.

Organization: The rest of this paper is organized as follows. In Section 2, we review properties of k-submodular functions. Sections 3 and 4 are devoted to 1/2-approximation algorithms for the total size constraint and 1/3-approximation algorithms for the individual size constraint, respectively. We show our experimental results in Section 5. We conclude our paper in Section 6.

2 Preliminaries

For an integer k ∈ N, [k] denotes the set {1, 2, ..., k}. We define a partial order ⪯ on (k + 1)^V so that, for x = (X_1, ..., X_k) and y = (Y_1, ..., Y_k) in (k + 1)^V, x ⪯ y if X_i ⊆ Y_i for every i ∈ [k]. We also define

∆_{e,i} f(x) = f(X_1, ..., X_{i−1}, X_i ∪ {e}, X_{i+1}, ..., X_k) − f(X_1, ..., X_k)

for x ∈ (k + 1)^V, e ∉ ⋃_{ℓ∈[k]} X_ℓ, and i ∈ [k], which is the marginal gain when adding e to the i-th set of x. Then, it is easy to see that the monotonicity of f is equivalent to ∆_{e,i} f(x) ≥ 0 for any x = (X_1, ..., X_k), e ∉ ⋃_{ℓ∈[k]} X_ℓ, and i ∈ [k]. It is also not hard to show (see [22] for details) that the k-submodularity of f implies orthant submodularity, i.e.,

∆_{e,i} f(x) ≥ ∆_{e,i} f(y) for any x, y ∈ (k + 1)^V with x ⪯ y, e ∉ ⋃_{ℓ∈[k]} Y_ℓ, and i ∈ [k],

and pairwise monotonicity, i.e.,

∆_{e,i} f(x) + ∆_{e,j} f(x) ≥ 0 for any x ∈ (k + 1)^V, e ∉ ⋃_{ℓ∈[k]} X_ℓ, and i, j ∈ [k] with i ≠ j.

Actually, the converse holds:

Theorem 2.1 (Ward and Živný [22]). A function f : (k + 1)^V → R is k-submodular if and only if f is orthant submodular and pairwise monotone.

It is often convenient to identify (k + 1)^V with {0, 1, ..., k}^V when analyzing k-submodular functions. Namely, we associate (X_1, ..., X_k) ∈ (k + 1)^V with x ∈ {0, 1, ..., k}^V by X_i = {e ∈ V | x(e) = i} for i ∈ [k]. Hence we sometimes abuse notation and simply write x = (X_1, ..., X_k), regarding a vector x as k disjoint subsets of V. We define the support of x ∈ {0, 1, ..., k}^V as supp(x) = {e ∈ V | x(e) ≠ 0}. Analogously, for x ∈ {0, 1, ..., k}^V and i ∈ [k], we define supp_i(x) = {e ∈ V | x(e) = i}. Let 0 be the zero vector in {0, 1, ..., k}^V.

3 Maximizing k-submodular Functions with the Total Size Constraint

In this section, we give a 1/2-approximation algorithm for the problem of maximizing monotone k-submodular functions subject to the total size constraint. Namely, we consider

max f(x) subject to |supp(x)| ≤ B and x ∈ (k + 1)^V,

where f : (k + 1)^V → R_+ is monotone k-submodular and B ∈ Z_+ is a non-negative integer.

3.1 A greedy algorithm

The first algorithm we propose is a simple greedy algorithm (Algorithm 1).

Algorithm 1 k-Greedy-TS
Input: a monotone k-submodular function f : (k + 1)^V → R_+ and an integer B ∈ Z_+.
Output: a vector s with |supp(s)| = B.
1: s ← 0.
2: for j = 1 to B do
3:   (e, i) ← arg max_{e ∈ V \ supp(s), i ∈ [k]} ∆_{e,i} f(s).
4:   s(e) ← i.
5: return s.

We show the following:

Theorem 3.1. Algorithm 1 outputs a 1/2-approximate solution by evaluating f O(knB) times, where n = |V|.

The number of evaluations of f is clearly O(knB).
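A direct sketch of Algorithm 1 in the label representation, exercised on a hypothetical topic-wise coverage objective (the names and data are illustrative, not the paper's implementation):

```python
def k_greedy_ts(f, V, k, B):
    """Algorithm 1 (k-Greedy-TS): greedily assign B elements to one of k sets."""
    s = {e: 0 for e in V}                 # label 0 means "unassigned"
    for _ in range(B):
        base = f(s)
        best, best_gain = None, float("-inf")
        for e in V:
            if s[e] != 0:
                continue
            for i in range(1, k + 1):
                s[e] = i
                gain = f(s) - base        # marginal gain Delta_{e,i} f(s)
                s[e] = 0
                if gain > best_gain:
                    best_gain, best = gain, (e, i)
        e, i = best
        s[e] = i
    return s

# Toy monotone k-submodular objective: topic-wise coverage (hypothetical data).
COVER = {1: {"u": {"a", "b"}, "v": {"b"}, "w": {"c"}},
         2: {"u": {"a"}, "v": {"b", "c", "d"}, "w": {"d"}}}

def coverage(s):
    out = set()
    for e, i in s.items():
        if i != 0:
            out |= COVER[i][e]
    return len(out)

s = k_greedy_ts(coverage, ["u", "v", "w"], k=2, B=2)
assert sum(1 for i in s.values() if i != 0) == 2
```

Each of the B rounds tries every remaining (element, label) pair, which is exactly the O(knB) evaluation count of Theorem 3.1.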
Hence, in what follows, we focus on analyzing the approximation ratio of Algorithm 1. Our analysis is based on the framework of [10]. Consider the j-th iteration of the for loop starting at Line 2. Let (e^(j), i^(j)) ∈ V × [k] be the pair greedily chosen in this iteration, and let s^(j) be the solution after this iteration. We define s^(0) = 0. Let o be the optimal solution. We iteratively define o^(0) = o, o^(1), ..., o^(B) as follows. For each j ∈ [B], let S^(j) = supp(o^(j−1)) \ supp(s^(j−1)). Then, we set o^(j) = e^(j) if e^(j) ∈ S^(j), and set o^(j) to be an arbitrary element in S^(j) otherwise. Then, we define o^(j−1/2) as the vector obtained from o^(j−1) by assigning 0 to the o^(j)-th element, and define o^(j) as the vector obtained from o^(j−1/2) by assigning i^(j) to the e^(j)-th element. Note that |supp(o^(j))| = B holds for every j ∈ {0, 1, ..., B} and o^(B) = s^(B) = s. Moreover, we have s^(j−1) ⪯ o^(j−1/2) for every j ∈ [B].

Algorithm 2 k-Stochastic-Greedy-TS
Input: a monotone k-submodular function f : (k + 1)^V → R_+, an integer B ∈ Z_+, and a failure probability δ > 0.
Output: a vector s with |supp(s)| = B.
1: s ← 0.
2: for j = 1 to B do
3:   R ← a random subset of size min{((n − j + 1)/(B − j + 1)) log(B/δ), n} sampled uniformly from V \ supp(s).
4:   (e, i) ← arg max_{e ∈ R, i ∈ [k]} ∆_{e,i} f(s).
5:   s(e) ← i.
6: return s.

Proof of Theorem 3.1. We first show that, for each j ∈ [B],

f(s^(j)) − f(s^(j−1)) ≥ f(o^(j−1)) − f(o^(j)).  (1)

For each j ∈ [B], let y^(j) = ∆_{e^(j), i^(j)} f(s^(j−1)), a^(j−1/2) = ∆_{o^(j), o^(j−1)(o^(j))} f(o^(j−1/2)), and a^(j) = ∆_{e^(j), i^(j)} f(o^(j−1/2)). Then, note that f(s^(j)) − f(s^(j−1)) = y^(j) and f(o^(j−1)) − f(o^(j)) = a^(j−1/2) − a^(j). By the monotonicity of f, it suffices to show that y^(j) ≥ a^(j−1/2). Since e^(j) and i^(j) are chosen greedily, we have y^(j) ≥ ∆_{o^(j), o^(j−1)(o^(j))} f(s^(j−1)). Since s^(j−1) ⪯ o^(j−1/2), we have ∆_{o^(j), o^(j−1)(o^(j))} f(s^(j−1)) ≥ a^(j−1/2) by orthant submodularity. Combining these two inequalities, we establish (1).
Then, we have

f(o) − f(s) = \sum_{j=1}^{B} (f(o^{(j-1)}) − f(o^{(j)})) \leq \sum_{j=1}^{B} (f(s^{(j)}) − f(s^{(j-1)})) = f(s) − f(0) \leq f(s),

which implies f(s) ≥ f(o)/2.

3.2 An almost linear-time algorithm by random sampling

In this section, we improve the number of evaluations of f from O(knB) to O(kn log B log(B/δ)), where δ > 0 is a failure probability. Our algorithm is shown in Algorithm 2. The main difference from Algorithm 1 is that we sample a sufficiently large subset R of V and then greedily assign a value looking only at elements in R. We reuse the notations e^(j), i^(j), S^(j) and s^(j) from Section 3.1, and let R^(j) be R in the j-th iteration. We iteratively define o^(0) = o, o^(1), ..., o^(B) as follows. If R^(j) ∩ S^(j) is empty, then we regard the algorithm as having failed. Suppose R^(j) ∩ S^(j) is non-empty. Then, we set o^(j) = e^(j) if e^(j) ∈ R^(j) ∩ S^(j), and set o^(j) to be an arbitrary element in R^(j) ∩ S^(j) otherwise. Finally, we define o^(j−1/2) and o^(j) as in Section 3.1, using o^(j−1), o^(j), and e^(j). If the algorithm does not fail and o^(1), ..., o^(B) are well defined, or in other words, if R^(j) ∩ S^(j) is non-empty for every j ∈ [B], then the rest of the analysis is exactly the same as in Section 3.1, and we achieve an approximation ratio of 1/2. Hence, it suffices to show that o^(1), ..., o^(B) are well defined with high probability.

Lemma 3.2. With probability at least 1 − δ, we have R^(j) ∩ S^(j) ≠ ∅ for every j ∈ [B].

Proof. Fix j ∈ [B]. If |R^(j)| = n, then we clearly have Pr[R^(j) ∩ S^(j) = ∅] = 0. Otherwise, we have

\Pr[R^{(j)} \cap S^{(j)} = \emptyset] = \left(1 - \frac{|S^{(j)}|}{|V \setminus \mathrm{supp}(s^{(j-1)})|}\right)^{|R^{(j)}|} \leq e^{-\frac{B-j+1}{n-j+1} \cdot \frac{n-j+1}{B-j+1} \log\frac{B}{\delta}} = \frac{\delta}{B}.

By the union bound over j ∈ [B], the lemma follows.

Algorithm 3 k-Greedy-IS
Input: a monotone k-submodular function f : (k + 1)^V → R_+ and integers B_1, ..., B_k ∈ Z_+.
Output: a vector s with |supp_i(s)| = B_i for each i ∈ [k].
1: s ← 0 and B ← Σ_{i∈[k]} B_i.
2: for j = 1 to B do
3:   I ← {i ∈ [k] | |supp_i(s)| < B_i}.
4:   (e, i) ← arg max_{e ∈ V \ supp(s), i ∈ I} ∆_{e,i} f(s).
5:   s(e) ← i.
6: return s.

Theorem 3.3.
Algorithm 2 outputs a 1/2-approximate solution with probability at least 1 − δ by evaluating f at most O(k(n − B) log B log(B/δ)) times.

Proof. By Lemma 3.2 and the analysis in Section 3.1, Algorithm 2 outputs a 1/2-approximate solution with probability at least 1 − δ. The number of evaluations of f is at most

k \sum_{j \in [B]} \frac{n - j + 1}{B - j + 1} \log\frac{B}{\delta} = k \sum_{j \in [B]} \frac{n - B + j}{j} \log\frac{B}{\delta} = O\left(kn \log B \log\frac{B}{\delta}\right).

4 Maximizing k-submodular Functions with the Individual Size Constraint

In this section, we consider the problem of maximizing monotone k-submodular functions subject to the individual size constraint. Namely, we consider

max f(x) subject to |supp_i(x)| ≤ B_i for all i ∈ [k] and x ∈ (k + 1)^V,

where f : (k + 1)^V → R_+ is monotone k-submodular, and B_1, ..., B_k ∈ Z_+ are non-negative integers.

4.1 A greedy algorithm

We first consider the simple greedy algorithm described in Algorithm 3. We show the following:

Theorem 4.1. Algorithm 3 outputs a 1/3-approximate solution by evaluating f at most O(knB) times.

It is clear that the number of evaluations of f is O(knB). The analysis of the approximation ratio is given in Appendix A.

4.2 An almost linear-time algorithm by random sampling

We next improve the number of evaluations of f from O(knB) to O(k²n log(B/k) log(B/δ)). Our algorithm is given in Algorithm 4. In Appendix A, we show the following.

Theorem 4.2. Algorithm 4 outputs a 1/3-approximate solution with probability at least 1 − δ by evaluating f at most O(k²n log(B/k) log(B/δ)) times.

Algorithm 4 k-Stochastic-Greedy-IS
Input: a monotone k-submodular function f : (k + 1)^V → R_+, integers B_1, ..., B_k ∈ Z_+, and a failure probability δ > 0.
Output: a vector s with |supp_i(s)| = B_i for each i ∈ [k].
1: s ← 0 and B ← Σ_{i∈[k]} B_i.
2: for j = 1 to B do
3:   I ← {i ∈ [k] | |supp_i(s)| < B_i} and R ← ∅.
4:   loop
5:     Add a random element of V \ (supp(s) ∪ R) to R.
6:     (e, i) ← arg max_{e ∈ R, i ∈ I} ∆_{e,i} f(s).
7:     if |R| ≥ min{((n − |supp_i(s)|)/(B_i − |supp_i(s)|)) log(B/δ), n} then
8:       s(e) ← i.
9:       break the loop.
10: return s.

5 Experiments

In this section, we experimentally demonstrate that our algorithms outperform baseline algorithms and that our almost linear-time algorithms significantly improve efficiency in practice. We conducted the experiments on a Linux server with an Intel Xeon E5-2690 (2.90 GHz) and 264GB of main memory. We implemented all algorithms in C++. We measured the computational cost in terms of the number of function evaluations, so that we can compare the efficiency of different methods independently of concrete implementations.

5.1 Influence maximization with k topics under the total size constraint

We first apply our algorithms to the problem of maximizing the spread of influence on several topics. We begin by describing our information diffusion model, called the k-topic independent cascade (k-IC) model, which generalizes the independent cascade model [6, 7]. In the k-IC model, there are k kinds of items, each having a different topic, and thus k kinds of rumors spread independently through a social network. Let G = (V, E) be a social network with an edge probability p^i_{u,v} for each edge (u, v) ∈ E, representing the strength of influence from u to v on the i-th topic. Given a seed s ∈ (k + 1)^V, for each i ∈ [k], the diffusion process of the rumor about the i-th topic starts by activating the vertices in supp_i(s), independently of the other topics. The process then unfolds in discrete steps according to the following randomized rule: when a vertex u becomes active in step t for the first time, it is given a single chance to activate each currently inactive vertex v, and succeeds with probability p^i_{u,v}. If u succeeds, then v becomes active in step t + 1. Whether or not u succeeds, it cannot make any further attempts to activate v in subsequent steps. The process runs until no more activations are possible.
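The k-IC diffusion rule above can be simulated with per-topic breadth-first cascades, one activation attempt per edge. The following Monte-Carlo sketch (a toy network with hypothetical probabilities, not the Digg data used below) estimates the influence spread of a seed assignment:

```python
import random

def simulate_kic(V, edges, seed, trials=1000, rng=random.Random(0)):
    """Monte-Carlo estimate of the k-IC influence spread sigma(s).

    edges[i] is a list of (u, v, p) triples for topic i; seed maps each
    vertex to a topic label in {0, ..., k}, with 0 meaning "not a seed"."""
    total = 0
    for _ in range(trials):
        union_active = set()
        for i, topic_edges in edges.items():
            adj = {}
            for u, v, p in topic_edges:          # adjacency for topic i
                adj.setdefault(u, []).append((v, p))
            active = {u for u in V if seed[u] == i}
            frontier = list(active)
            while frontier:                      # one activation chance per edge
                nxt = []
                for u in frontier:
                    for v, p in adj.get(u, []):
                        if v not in active and rng.random() < p:
                            active.add(v)
                            nxt.append(v)
                frontier = nxt
            union_active |= active               # spread counts the union over topics
        total += len(union_active)
    return total / trials

# Tiny hypothetical network: two topics on a 4-vertex path.
V = [0, 1, 2, 3]
edges = {1: [(0, 1, 0.9), (1, 2, 0.9)], 2: [(2, 3, 0.9)]}
seed = {0: 1, 1: 0, 2: 2, 3: 0}   # vertex 0 gets topic 1, vertex 2 gets topic 2
spread = simulate_kic(V, edges, seed)
assert 2.0 <= spread <= 4.0
```

Because each vertex enters the frontier at most once per topic, each edge is attempted at most once, matching the single-chance rule of the model.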
The influence spread σ : (k + 1)^V → R_+ in the k-IC model is defined as the expected total number of vertices that eventually become active in at least one of the k diffusion processes given a seed s, namely, σ(s) = E[|⋃_{i∈[k]} A_i(supp_i(s))|], where A_i(supp_i(s)) is a random variable representing the set of vertices activated in the diffusion process of the i-th topic. Given a directed graph G = (V, E), edge probabilities p^i_{u,v} ((u, v) ∈ E, i ∈ [k]), and a budget B, the problem is to select a seed s ∈ (k + 1)^V that maximizes σ(s) subject to |supp(s)| ≤ B. It is easy to see that the influence spread function σ is monotone k-submodular (see Appendix B for the proof).

Experimental settings: We use a publicly available real-world dataset of the social news website Digg.¹ This dataset consists of a directed graph, where each vertex represents a user and each edge represents the friendship between a pair of users, and a log of user votes for stories. We set the number of topics k to 10, and estimated the edge probabilities for each topic from the log using the method of [1]. We set the value of B to 5, 10, ..., 100 and compared the following algorithms:

¹http://www.isi.edu/~lerman/downloads/digg2009.html

Figure 1: Comparison of influence spreads. Figure 2: The number of influence estimations.

• k-Greedy-TS (Algorithm 1).
• k-Stochastic-Greedy-TS (Algorithm 2). We chose δ = 0.1.
• Single(i): Greedily choose B vertices considering only the i-th topic and assign them items of the i-th topic.
• Degree: Choose B vertices in decreasing order of degree and assign them items of random topics.
• Random: Randomly choose B vertices and assign them items of random topics.

For the first three algorithms, we implemented the lazy evaluation technique [16] for efficiency.
For k-Greedy-TS, we maintain an upper bound on the gain of inserting each pair (e, i), so the lazy evaluation technique applies directly. For k-Stochastic-Greedy-TS, we maintain an upper bound on the gain of each pair (e, i), and in each iteration we pick the pair in R with the largest gain. While the algorithms run, the influence spread is approximated by simulating the diffusion process 100 times. When the algorithms terminate, we simulate the diffusion process 10,000 times to obtain sufficiently accurate estimates of the influence spread.

Results: Figure 1 shows the influence spread achieved by each algorithm. Among the Single(i) strategies we show only Single(3), since its influence spread is the largest. k-Greedy-TS and k-Stochastic-Greedy-TS clearly outperform the other methods, owing to their theoretical guarantee on the solution quality. Note that our two methods simulated the diffusion process only 100 times when choosing a seed set, which is relatively small, because of the high computational cost. Consequently, the approximate value of the influence spread has a relatively high variance, and this might have caused the greedy method to choose seeds with small influence spreads. Remark that Single(3) works worse than Degree for B larger than 35, which means that focusing on a single topic may significantly degrade the influence spread. Random shows poor performance, as expected. Figure 2 reports the number of influence estimations of the greedy algorithms. We note that k-Stochastic-Greedy-TS outperforms k-Greedy-TS, which implies that the random sampling technique is effective even when combined with the lazy evaluation technique. The number of evaluations of k-Greedy-TS increases drastically when B is around 40, since we run out of influential vertices and need to re-evaluate the remaining vertices. Indeed, the slope of k-Greedy-TS after B = 40 is almost constant in Figure 1, which indicates that the remaining vertices have similar influence.
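The lazy evaluation bookkeeping described above can be sketched with a max-heap of cached gain upper bounds (a CELF-style sketch, not the paper's C++ implementation; names and data are illustrative). By orthant submodularity, a cached bound never increases as the solution grows, so a pair whose freshly re-evaluated gain still beats the best remaining bound can be committed without touching the other pairs:

```python
import heapq

def lazy_k_greedy_ts(f, V, k, B):
    """k-Greedy-TS with lazy evaluation of the marginal gains."""
    s = {e: 0 for e in V}
    base = f(s)
    heap = []                                  # entries: (-upper_bound, e, i)
    for e in V:
        for i in range(1, k + 1):
            s[e] = i
            heapq.heappush(heap, (-(f(s) - base), e, i))
            s[e] = 0
    for _ in range(B):
        while True:
            neg_bound, e, i = heapq.heappop(heap)
            if s[e] != 0:
                continue                       # e was assigned in an earlier round
            s[e] = i
            gain = f(s) - base                 # re-evaluate the stale bound
            s[e] = 0
            if not heap or gain >= -heap[0][0]:
                s[e] = i                       # still the best pair: commit
                base += gain
                break
            heapq.heappush(heap, (-gain, e, i))  # otherwise reinsert, updated
    return s

# Toy objective (hypothetical): topic-wise coverage, monotone k-submodular.
COVER = {1: {"u": {"a", "b"}, "v": {"b"}}, 2: {"u": {"a"}, "v": {"b", "c"}}}

def coverage(s):
    out = set()
    for e, i in s.items():
        if i:
            out |= COVER[i][e]
    return len(out)

s = lazy_k_greedy_ts(coverage, ["u", "v"], k=2, B=2)
assert sum(1 for i in s.values() if i) == 2
```

Up to ties, this returns the same solution as the plain greedy while typically re-evaluating only a few pairs per round.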
Single(3) is faster than our algorithms since it considers only a single topic.

5.2 Sensor placement with k kinds of measures under the individual size constraint

Next, we apply our algorithms for maximizing k-submodular functions with the individual size constraint to a sensor placement problem that allows multiple kinds of sensors. In this problem, we want to determine the placement of multiple kinds of sensors that most effectively reduces the expected uncertainty. We need several notions to describe our model. Let Ω = {X_1, X_2, ..., X_n} be a set of discrete random variables. The entropy of a subset S of Ω is defined as H(S) = −Σ_{s∈dom S} Pr[s] log Pr[s]. The conditional entropy of Ω having observed S is H(Ω | S) := H(Ω) − H(S). Hence, in order to reduce the uncertainty of Ω, we want to find a set S with as large an entropy as possible.

Now we formalize the sensor placement problem. There are k kinds of sensors for different measures. Suppose that we want to allocate B_i sensors of the i-th kind for each i ∈ [k], and there is a set V of n locations, each of which can be instrumented with exactly one sensor. Let X^i_e be the random variable representing the observation collected from a sensor of the i-th kind if it is installed at the e-th location, and let Ω = {X^i_e}_{i∈[k], e∈V}. Then, the problem is to select x ∈ (k + 1)^V that maximizes f(x) = H(⋃_{e∈supp(x)} {X^{x(e)}_e}) subject to |supp_i(x)| ≤ B_i for each i ∈ [k]. It is easy to see that f is monotone k-submodular (see Appendix B for details).

Figure 3: Comparison of entropy. Figure 4: The number of entropy evaluations.
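In practice the entropy H(S) of a selected set of discretized sensor variables is estimated from the empirical joint distribution of the readings. A minimal sketch of that estimate, on hypothetical discretized readings:

```python
import math
from collections import Counter

def entropy(samples):
    """Empirical entropy H(S) of jointly observed discrete variables.

    samples is a list of tuples; each tuple is one joint observation of the
    selected (sensor kind, location) variables, already discretized into bins."""
    n = len(samples)
    counts = Counter(samples)
    return -sum((c / n) * math.log(c / n) for c in counts.values())

# Hypothetical discretized readings for two selected variables (e.g. a
# temperature sensor at location 1 and a humidity sensor at location 2).
readings = [(0, 2), (0, 2), (1, 2), (1, 3)]
h = entropy(readings)
assert 0.0 <= h <= math.log(len(readings))
```

Maximizing f(x) then amounts to choosing the (kind, location) pairs whose joint empirical distribution is most spread out, subject to the per-kind budgets.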
Experimental settings: We use the publicly available Intel Lab dataset.² This dataset contains a log of approximately 2.3 million readings collected from 54 sensors deployed in the Intel Berkeley research lab between February 28th and April 5th, 2004. We extracted temperature, humidity, and light values from each reading and discretized them into bins of 2 degrees Celsius, 5 points, and 100 lux, respectively. Hence there are k = 3 kinds of sensors to be allocated to n = 54 locations. The budgets for sensors measuring temperature, humidity, and light are denoted by B_1, B_2, and B_3. We set B_1 = B_2 = B_3 = b, where b is a parameter varying from 1 to 18. We compare the following algorithms:

• k-Greedy-IS (Algorithm 3).
• k-Stochastic-Greedy-IS (Algorithm 4), with δ = 0.1.
• Single(i): allocate sensors of the i-th kind to ∑_j B_j greedily chosen places.

We again implemented these algorithms with the lazy evaluation technique, in a similar way to the previous experiment. Note that the Single(i) strategies do not satisfy the individual size constraint. Results: Figure 3 shows the entropy achieved by each algorithm. k-Greedy-IS and k-Stochastic-Greedy-IS clearly outperform the Single(i) strategies. The maximum gap between the entropies achieved by k-Greedy-IS and k-Stochastic-Greedy-IS is only 0.18%. Figure 4 shows the number of entropy evaluations made by each algorithm. We observe that k-Stochastic-Greedy-IS outperforms k-Greedy-IS; in particular, when b = 18, the number of entropy evaluations is reduced by 31%. The Single(i) strategies are faster because they consider only sensors of a fixed kind.

6 Conclusions

Motivated by real-world applications, we proposed approximation algorithms for maximizing monotone k-submodular functions. Our algorithms run in almost linear time and achieve approximation ratios of 1/2 for the total size constraint and 1/3 for the individual size constraint.
We empirically demonstrated that our algorithms outperform baseline methods for maximizing submodular functions in terms of solution quality. Improving the approximation ratio of 1/3 for the individual size constraint, or showing that it is tight, is an interesting open problem.

Acknowledgments. Y. Y. is supported by JSPS Grant-in-Aid for Young Scientists (B) (No. 26730009), MEXT Grant-in-Aid for Scientific Research on Innovative Areas (24106003), and the JST ERATO Kawarabayashi Large Graph Project. N. O. is supported by the JST ERATO Kawarabayashi Large Graph Project.

² http://db.csail.mit.edu/labdata/labdata.html
Spherical Random Features for Polynomial Kernels

Jeffrey Pennington, Felix X. Yu, Sanjiv Kumar
Google Research
{jpennin, felixyu, sanjivk}@google.com

Abstract

Compact explicit feature maps provide a practical framework to scale kernel methods to large-scale learning, but deriving such maps for many types of kernels remains a challenging open problem. Among the commonly used kernels for nonlinear classification are polynomial kernels, for which low approximation error has thus far necessitated explicit feature maps of large dimensionality, especially for higher-order polynomials. Meanwhile, because polynomial kernels are unbounded, they are frequently applied to data that has been normalized to unit ℓ2 norm. The question we address in this work is: if we know a priori that the data is normalized, can we devise a more compact map? We show that a putative affirmative answer to this question based on Random Fourier Features is impossible in this setting, and introduce a new approximation paradigm, Spherical Random Fourier (SRF) features, which circumvents these issues and delivers a compact approximation to polynomial kernels for data on the unit sphere. Compared to prior work, SRF features are less rank-deficient, more compact, and achieve better kernel approximation, especially for higher-order polynomials. The resulting predictions have lower variance and typically yield better classification accuracy.

1 Introduction

Kernel methods such as nonlinear support vector machines (SVMs) [1] provide a powerful framework for nonlinear learning, but they often come with significant computational cost. Their training complexity varies from O(n²) to O(n³), which becomes prohibitive when the number of training examples, n, grows to the millions. Testing also tends to be slow, with O(nd) complexity for d-dimensional vectors.
Explicit kernel maps provide a practical alternative for large-scale applications, since they rely on properties of linear methods, which can be trained in O(n) time [2, 3, 4] and applied in O(d) time, independent of n. The idea is to determine an explicit nonlinear map Z(·): R^d → R^D such that K(x, y) ≈ ⟨Z(x), Z(y)⟩, and to perform linear learning in the resulting feature space. This procedure can utilize the fast training and testing of linear methods while still preserving much of the expressive power of nonlinear methods. Following this reasoning, Rahimi and Recht [5] proposed a procedure for generating such a nonlinear map, derived from the Monte Carlo integration of an inverse Fourier transform arising from Bochner's theorem [6]. Explicit nonlinear random feature maps have also been proposed for other types of kernels, such as intersection kernels [7], generalized RBF kernels [8], skewed multiplicative histogram kernels [9], additive kernels [10], and semigroup kernels [11]. Another type of kernel that is used widely in many application domains is the polynomial kernel [12, 13], defined by K(x, y) = (⟨x, y⟩ + q)^p, where q is the bias and p is the degree of the polynomial. Approximating polynomial kernels with explicit nonlinear maps is a challenging problem, but substantial progress has been made in this area recently. Kar and Karnick [14] catalyzed this line of research by introducing the Random Maclaurin (RM) technique, which approximates ⟨x, y⟩^p by the product ∏_{i=1}^{p} ⟨w_i, x⟩ · ∏_{i=1}^{p} ⟨w_i, y⟩, where each w_i is a vector of Bernoulli random variables. Another technique, Tensor Sketch [15], offers further improvement by instead writing ⟨x, y⟩^p as ⟨x^(p), y^(p)⟩, where x^(p) is the p-level tensor product of x, and then estimating this tensor product with a convolution of count sketches.
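To see why the Random Maclaurin construction is sensible, the following sketch (ours; the actual RM map averages D such products to form features) checks that a single product of Rademacher projections is an unbiased estimator of ⟨x, y⟩^p:

```python
import random

def random_maclaurin_estimate(x, y, p, num_samples, rng):
    """Monte Carlo estimate of <x, y>^p: for independent Rademacher (+-1)
    vectors w_1, ..., w_p, E[prod_i <w_i, x> * prod_i <w_i, y>] = <x, y>^p,
    because E[w w^T] = I for each w_i."""
    total = 0.0
    for _ in range(num_samples):
        prod = 1.0
        for _ in range(p):
            w = [rng.choice((-1.0, 1.0)) for _ in range(len(x))]
            wx = sum(wi * xi for wi, xi in zip(w, x))
            wy = sum(wi * yi for wi, yi in zip(w, y))
            prod *= wx * wy
        total += prod
    return total / num_samples
```

The estimator is unbiased but its variance grows with p, which is one reason compact RM maps struggle for higher-order polynomials, as the experiments later in the paper show.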
Although these methods are applicable to any real-valued input data, in practice polynomial kernels are commonly used on ℓ2-normalized input data [15] because they are otherwise unbounded. Moreover, much of the theoretical analysis developed in prior work is based on normalized vectors [16], and it has been shown that utilizing norm information improves the estimates of random projections [17]. Therefore, a natural question to ask is: if we know a priori that the data is ℓ2-normalized, can we come up with a better nonlinear map?¹ Answering this question is the main focus of this work and will lead us to the development of a new form of kernel approximation. Restricting the input domain to the unit sphere implies that ⟨x, y⟩ = (2 − ||x − y||²)/2 for all x, y ∈ S^{d−1}, so that a polynomial kernel can be viewed as a shift-invariant kernel in this restricted domain. As such, one might expect the random feature maps developed in [5] to be applicable in this case. Unfortunately, this expectation turns out to be false, because Bochner's theorem cannot be applied in this setting. The obstruction is an inherent limitation of polynomial kernels and is examined extensively in Section 3.1. In Section 3.2, we propose an alternative formulation that overcomes these limitations by approximating the Fourier transform of the kernel function as the positive projection of an indefinite combination of Gaussians. We provide a bound on the approximation error of these Spherical Random Fourier (SRF) features in Section 4, and study their performance on a variety of standard datasets, including a large-scale experiment on ImageNet, in Section 5 and in the Supplementary Material. Compared to prior work, the SRF method is able to achieve lower kernel approximation error with compact nonlinear maps, especially for higher-order polynomials. The variance in kernel approximation error is much lower than that of existing techniques, leading to more stable predictions.
In addition, it does not suffer from the rank-deficiency problem seen in other methods. Before describing the SRF method in detail, we begin by reviewing the method of Random Fourier Features.

2 Background: Random Fourier Features

In [5], a method for the explicit construction of compact nonlinear randomized feature maps was presented. The technique relies on two important properties of the kernel: (i) the kernel is shift-invariant, i.e. K(x, y) = K(z) where z = x − y, and (ii) the function K(z) is positive definite on R^d. Property (ii) guarantees that the Fourier transform of K(z),

k(w) = (2π)^{−d/2} ∫ d^d z K(z) e^{i⟨w,z⟩},

admits an interpretation as a probability distribution. This fact follows from Bochner's celebrated characterization of positive definite functions:

Theorem 1. (Bochner [6]) A function K ∈ C(R^d) is positive definite on R^d if and only if it is the Fourier transform of a finite non-negative Borel measure on R^d.

A consequence of Bochner's theorem is that the inverse Fourier transform of k(w) can be interpreted as the computation of an expectation, i.e.,

K(z) = (2π)^{−d/2} ∫ d^d w k(w) e^{−i⟨w,z⟩} = E_{w∼p(w)}[e^{−i⟨w, x−y⟩}]    (1)
     = 2 E_{w∼p(w), b∼U(0,2π)}[cos(⟨w, x⟩ + b) cos(⟨w, y⟩ + b)],

where p(w) = (2π)^{−d/2} k(w) and U(0, 2π) is the uniform distribution on [0, 2π). If the above expectation is approximated using Monte Carlo with D random samples w_i, then K(x, y) ≈ ⟨Z(x), Z(y)⟩ with Z(x) = √(2/D) [cos(w_1^T x + b_1), ..., cos(w_D^T x + b_D)]^T. This identification is made possible by property (i), which guarantees that the functional dependence on x and y factorizes multiplicatively in frequency space.

¹ We are not claiming total generality of this setting; nevertheless, in cases where the vector length carries useful information and should be preserved, it could be added as an additional feature before normalization.
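A minimal sketch of this construction (ours; the Gaussian kernel is used purely for illustration, since its spectral density p(w) is a standard normal):

```python
import math, random

def rff_map(x, ws, bs):
    """Random Fourier feature map Z(x) = sqrt(2/D) [cos(<w_i, x> + b_i)]_{i=1..D}
    from eqn. (1); <Z(x), Z(y)> approximates K(x - y) in expectation."""
    D = len(ws)
    return [math.sqrt(2.0 / D) * math.cos(sum(wj * xj for wj, xj in zip(w, x)) + b)
            for w, b in zip(ws, bs)]

# Demonstration with the Gaussian kernel K(z) = exp(-||z||^2 / 2), whose spectral
# density is the standard normal (this kernel choice is ours, for illustration;
# SRF instead samples from the fitted density of Section 3.2).
random.seed(0)
d, D = 3, 5000
ws = [[random.gauss(0.0, 1.0) for _ in range(d)] for _ in range(D)]
bs = [random.uniform(0.0, 2.0 * math.pi) for _ in range(D)]
x, y = [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]
approx = sum(zx * zy for zx, zy in zip(rff_map(x, ws, bs), rff_map(y, ws, bs)))
exact = math.exp(-sum((xi - yi) ** 2 for xi, yi in zip(x, y)) / 2.0)
```

The Monte Carlo error of the inner product shrinks like O(1/√D) in the number of sampled frequencies, consistent with the O(1/D) expected MSE discussed later.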
Such Random Fourier Features have been used to approximate different types of positive-definite shift-invariant kernels, including the Gaussian kernel, the Laplacian kernel, and the Cauchy kernel. However, they have not yet been applied to polynomial kernels, because this class of kernels does not satisfy the positive-definiteness prerequisite for the application of Bochner's theorem. This statement may seem counter-intuitive given the known result that polynomial kernels K(x, y) are positive definite kernels. The subtlety is that this does not necessarily imply that the associated single-variable functions K(z) = K(x − y) are positive definite on R^d for all d. We will prove this fact in the next section, along with the construction of an efficient and effective modification of the Random Fourier method that can be applied to polynomial kernels defined on the unit sphere.

3 Polynomial kernels on the unit sphere

In this section, we consider approximating the polynomial kernel defined on S^{d−1} × S^{d−1},

K(x, y) = (1 − ||x − y||²/a²)^p = α (q + ⟨x, y⟩)^p    (2)

with q = a²/2 − 1 and α = (2/a²)^p. We will restrict our attention to p ≥ 1 and a ≥ 2. The kernel is a shift-invariant radial function of the single variable z = x − y, which with a slight abuse of notation we write as K(x, y) = K(z) = K(z), with z = ||z||.² In Section 3.1, we show that the Fourier transform of K(z) is not a non-negative function, so a straightforward application of Bochner's theorem to produce Random Fourier Features as in [5] is impossible in this case. Nevertheless, in Section 3.2, we propose a fast and accurate approximation of K(z) by a surrogate positive definite function, which enables us to construct compact Fourier features.

3.1 Obstructions to Random Fourier Features

Because z = ||x − y|| = √(2 − 2 cos θ) ≤ 2, the behavior of K(z) for z > 2 is undefined and arbitrary, since it does not affect the original kernel function in eqn. (2).
On the other hand, it should be specified in order to perform the Fourier transform, which requires an integration over all values of z. We first consider the natural choice of K(z) = 0 for z > 2 before showing that all other choices lead to the same conclusion.

Lemma 1. The Fourier transform of {K(z), z ≤ 2; 0, z > 2} is not a non-negative function of w for any values of a, p, and d.

Proof. (See the Supplementary Material for details.) A direct calculation gives

k(w) = ∑_{i=0}^{p} [p!/(p − i)!] (1 − 4/a²)^{p−i} (2/a²)^i (2/w)^{d/2+i} J_{d/2+i}(2w),

where J_ν(z) is the Bessel function of the first kind. Expanding for large w yields

k(w) ∼ (1/√(πw)) (1 − 4/a²)^p (2/w)^{d/2} cos((d + 1)π/4 − 2w),    (3)

which takes negative values for some w for all a > 2, p, and d.

So a Monte Carlo approximation of K(z) as in eqn. (1) is impossible in this case. However, there is still the possibility of defining the behavior of K(z) for z > 2 differently, in such a way that the Fourier transform is positive and integrable on R^d. The latter condition should hold for all d, since the vector dimensionality d can vary arbitrarily depending on the input data. We now show that such a function cannot exist. To this end, we first recall a theorem due to Schoenberg regarding completely monotone functions.

² We also follow this practice in frequency space, i.e. if k(w) is radial, we also write k(w) = k(w).

Definition 1. A function f is said to be completely monotone on an interval [a, b] ⊂ R if it is continuous on the closed interval, f ∈ C([a, b]), infinitely differentiable in its interior, f ∈ C^∞((a, b)), and (−1)^l f^(l)(x) ≥ 0 for x ∈ (a, b) and l = 0, 1, 2, ....

Theorem 2. (Schoenberg [18]) A function φ is completely monotone on [0, ∞) if and only if Φ ≡ φ(||·||²) is positive definite and radial on R^d for all d.

Together with Theorem 1, Theorem 2 shows that φ(z) = K(√z) must be completely monotone if k(w) is to be interpreted as a probability distribution.
We now establish that φ(z) cannot be completely monotone and simultaneously satisfy φ(z) = K(√z) for z ≤ 2.

Proposition 1. The function φ(z) = K(√z) is completely monotone on [0, a²].

Proof. From the definition of φ, φ(z) = (1 − z/a²)^p, so φ is continuous on [0, a²], infinitely differentiable on (0, a²), and its derivatives vanish for l > p. They obey (−1)^l φ^(l)(z) = [p!/(p − l)!] φ(z)/(a² − z)^l ≥ 0, where the inequality follows since z < a². Therefore φ is completely monotone on [0, a²].

Theorem 3. Suppose f is a completely monotone polynomial of degree n on the interval [0, c], c < ∞, with f(c) = 0. Then there is no completely monotone function on [0, ∞) that agrees with f on [0, a] for any nonzero a < c.

Proof. Let g ∈ C([0, ∞)) ∩ C^∞((0, ∞)) be a non-negative function that agrees with f on [0, a], and let h = g − f. We show that for every non-negative integer m there exists a point χ_m with a < χ_m ≤ c such that h^(m)(χ_m) > 0. For m = 0, the point χ_0 = c obeys h(χ_0) = g(χ_0) − f(χ_0) = g(χ_0) > 0 by the definition of g. Now suppose there is a point χ_m such that a < χ_m ≤ c and h^(m)(χ_m) > 0. The mean value theorem then guarantees the existence of a point χ_{m+1} with a < χ_{m+1} < χ_m and h^(m+1)(χ_{m+1}) = [h^(m)(χ_m) − h^(m)(a)]/(χ_m − a) = h^(m)(χ_m)/(χ_m − a) > 0, where we have used the fact that h^(m)(a) = 0 together with the induction hypothesis. Noting that f^(m) = 0 for all m > n, this result implies that g^(m)(χ_m) > 0 for all m > n. Therefore g cannot be completely monotone.

Corollary 1. There does not exist a finite non-negative Borel measure on R^d whose Fourier transform agrees with K(z) on [0, 2].

3.2 Spherical Random Fourier features

From the section above, we see that Bochner's theorem cannot be applied directly to the polynomial kernel. In addition, it is impossible to construct a positive, integrable k̂(w) whose inverse Fourier transform K̂(z) equals K(z) exactly on [0, 2].
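The identities used in this section are easy to check numerically. The snippet below (our own sanity check, not part of the paper) verifies that the two forms of eqn. (2) agree on random unit vectors, and that the derivative identity from the proof of Proposition 1 holds:

```python
import math, random

def poly_kernel_dist(x, y, a, p):
    """(1 - ||x - y||^2 / a^2)^p form of eqn. (2)."""
    z2 = sum((xi - yi) ** 2 for xi, yi in zip(x, y))
    return (1.0 - z2 / (a * a)) ** p

def poly_kernel_inner(x, y, a, p):
    """alpha * (q + <x, y>)^p form of eqn. (2), q = a^2/2 - 1, alpha = (2/a^2)^p."""
    q = a * a / 2.0 - 1.0
    alpha = (2.0 / (a * a)) ** p
    return alpha * (q + sum(xi * yi for xi, yi in zip(x, y))) ** p

def phi_l_deriv(z, a, p, l):
    """Direct l-th derivative of phi(z) = (1 - z/a^2)^p, for l <= p."""
    coef = math.factorial(p) / math.factorial(p - l)
    return (-1.0) ** l * coef * a ** (-2 * l) * (1.0 - z / (a * a)) ** (p - l)

def prop1_rhs(z, a, p, l):
    """Proposition 1's identity: (-1)^l phi^(l)(z) = p!/(p-l)! phi(z)/(a^2 - z)^l."""
    coef = math.factorial(p) / math.factorial(p - l)
    return coef * (1.0 - z / (a * a)) ** p / (a * a - z) ** l

def unit(v):
    n = math.sqrt(sum(vi * vi for vi in v))
    return [vi / n for vi in v]
```

The equality of the two kernel forms uses ||x − y||² = 2 − 2⟨x, y⟩ on the unit sphere, and the non-negativity of the right-hand side for z < a² is exactly the complete monotonicity claimed by Proposition 1.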
Despite this result, it is nevertheless possible to find a K̂(z) that is a good approximation of K(z) on [0, 2], which is all that is necessary, given that we will be approximating K̂(z) by Monte Carlo integration anyway. We present our method of Spherical Random Fourier (SRF) features in this section. We first recall a characterization, due to Schoenberg, of radial functions that are positive definite on R^d for all d.

Theorem 4. (Schoenberg [18]) A continuous function f : [0, ∞) → R is positive definite and radial on R^d for all d if and only if it is of the form f(r) = ∫₀^∞ e^{−r²t²} dμ(t), where μ is a finite non-negative Borel measure on [0, ∞).

This characterization motivates an approximation of K(z) as a sum of N Gaussians, K̂(z) = ∑_{i=1}^{N} c_i e^{−σ_i² z²}. To increase the accuracy of the approximation, we allow the c_i to take negative values. Doing so enables its Fourier transform (which is also a sum of Gaussians) to become negative. We circumvent this problem by mapping those negative values to zero,

k̂(w) = max(0, ∑_{i=1}^{N} c_i (1/(√2 σ_i))^d e^{−w²/(4σ_i²)}),    (4)

and simply defining K̂(z) as its inverse Fourier transform. Owing to the max in eqn. (4), it is not possible to calculate an analytical expression for K̂(z). Thankfully, this is not necessary, since we can evaluate it numerically, as described below.

Figure 1: K(z), its approximation K̂(z), and the corresponding pdf p(w) for d = 256 and a = 2, for polynomial orders (a) p = 10 and (b) p = 20. Higher-order polynomials are approximated better; see eqn. (6).

Algorithm 1 Spherical Random Fourier (SRF) Features
Input: A polynomial kernel K(x, y) = K(z), z = ||x − y||_2, ||x||_2 = 1, ||y||_2 = 1, with bias a ≥ 2, order p ≥ 1, input dimensionality d, and feature dimensionality D.
Output: A randomized feature map Z(·): R^d → R^D such that ⟨Z(x), Z(y)⟩ ≈ K(x, y).
1.
Solve argmin_{K̂} ∫₀² dz [K(z) − K̂(z)]² for k̂(w), where K̂(z) is the inverse Fourier transform of k̂(w), whose form is given in eqn. (4). Let p(w) = (2π)^{−d/2} k̂(w).
2. Draw D i.i.d. samples w_1, ..., w_D from p(w).
3. Draw D i.i.d. samples b_1, ..., b_D from the uniform distribution on [0, 2π].
4. Z(x) = √(2/D) [cos(w_1^T x + b_1), ..., cos(w_D^T x + b_D)]^T.

Concretely, K̂(z) can be evaluated by performing a one-dimensional numerical integral,

K̂(z) = ∫₀^∞ dw w k̂(w) (w/z)^{d/2−1} J_{d/2−1}(wz),

which is well approximated using a fixed-width grid in w and z, and can be computed via a single matrix multiplication. We then optimize the following cost function, which is simply the MSE between K(z) and our approximation of it,

L = (1/2) ∫₀² dz [K(z) − K̂(z)]²,    (5)

which defines an optimal probability distribution p(w) through eqn. (4) and the relation p(w) = (2π)^{−d/2} k̂(w). We can then follow the Random Fourier Feature method [5] to generate the nonlinear maps. The entire SRF process is summarized in Algorithm 1. Note that for any given set of kernel parameters (a, p, d), p(w) can be pre-computed independently of the data.

4 Approximation error

The total MSE comes from two sources: the error in approximating the function, i.e. L from eqn. (5), and the error from Monte Carlo sampling. The expected MSE of Monte Carlo sampling converges at a rate of O(1/D), and a bound on the supremum of the absolute error was given in [5]. We therefore focus on analyzing the first type of error, and describe a simple method to obtain an upper bound on L. Consider the function K̂(z) = e^{−(p/a²) z²}, which is a special case of eqn. (4) obtained by setting N = 1, c_1 = 1, and σ_1 = √(p/a²). The MSE between K(z) and this function thus provides an upper bound on our approximation error:

L = (1/2) ∫₀² dz [K̂(z) − K(z)]² ≤ (1/2) ∫₀^a dz [K̂(z) − K(z)]²
  = (1/2) ∫₀^a dz [ exp(−2p z²/a²) + (1 − z²/a²)^{2p} − 2 exp(−p z²/a²) (1 − z²/a²)^p ]
  = (a/4) √(π/(2p)) erf(√(2p)) + (a/4) √π Γ(2p + 1)/Γ(2p + 3/2) − (a/2) √π [Γ(p + 1)/Γ(p + 3/2)] M(1/2, p + 3/2, −p).
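Steps 2-3 of Algorithm 1 amount to sampling from a radial density. The sketch below is our own simplification: the mixture weights stand in for the output of the least-squares fit of step 1 (which is not performed here), and the grid-based inverse-CDF sampler replaces whatever quadrature the authors used.

```python
import math, random

def srf_sample_frequencies(c, sigma, d, D, grid_max=50.0, grid_n=2000, rng=random):
    """Draw D frequency vectors w from the radial density defined by the
    clipped Gaussian mixture of eqn. (4). (c, sigma) are illustrative
    stand-ins for the fitted coefficients of Algorithm 1, step 1."""
    def k_hat(w):  # eqn. (4), up to the overall (2*pi)^(-d/2) normalization
        s = sum(ci * (1.0 / (math.sqrt(2.0) * si)) ** d * math.exp(-w * w / (4.0 * si * si))
                for ci, si in zip(c, sigma))
        return max(0.0, s)

    # In d dimensions the radius |w| has density proportional to k_hat(r) * r^(d-1).
    rs = [grid_max * (i + 0.5) / grid_n for i in range(grid_n)]
    pdf = [k_hat(r) * r ** (d - 1) for r in rs]
    total = sum(pdf)
    cdf, acc = [], 0.0
    for v in pdf:
        acc += v / total
        cdf.append(acc)
    cdf[-1] = 1.0

    def sample_radius():  # inverse-CDF sampling on the tabulated grid
        u = rng.random()
        return rs[next(i for i, ci in enumerate(cdf) if ci >= u)]

    def sample_direction():  # uniform direction on the unit sphere S^(d-1)
        g = [rng.gauss(0.0, 1.0) for _ in range(d)]
        n = math.sqrt(sum(gi * gi for gi in g))
        return [gi / n for gi in g]

    return [[r * u for u in sample_direction()] for r in (sample_radius() for _ in range(D))]
```

Note how the clipping in k_hat truncates the density's support wherever the indefinite combination of Gaussians would go negative, which is exactly the "positive projection" described in the text.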
Figure 2: Comparison of the MSE of kernel approximation on different datasets for various polynomial orders (panels: (a) p = 3, (b) p = 7, (c) p = 10, (d) p = 20; x-axis: log dimensionality; methods: RM, TS, SRF). The first to third rows show results on usps, gisette, and adult, respectively. SRF gives better kernel approximation, especially for large p.

In the first line we have used the fact that the integrand is positive and a ≥ 2. The three terms on the second line are integrated using the standard integral definitions of the error function, the beta function, and Kummer's confluent hypergeometric function [19], respectively. To expose the functional dependence of this result more clearly, we perform an expansion for large p, using the asymptotic expansions of the error function and the Gamma function,

erf(z) = 1 − (e^{−z²}/(z√π)) ∑_{k=0}^{∞} (−1)^k (2k − 1)!!/(2z²)^k,
log Γ(z) = z log z − z − (1/2) log(z/(2π)) + ∑_{k=2}^{∞} B_k/[k(k − 1) z^{k−1}],

where the B_k are Bernoulli numbers. For the third term, we write the series representation of M(a, b, z),

M(a, b, z) = [Γ(b)/Γ(a)] ∑_{k=0}^{∞} [Γ(a + k)/Γ(b + k)] z^k/k!,

expand each term for large p, and sum the result.
All together, we obtain the following bound,

L ≤ (105/4096) √(π/2) · a/p^{5/2},    (6)

which decays at a rate of O(p^{−5/2}) and becomes negligible for higher-order polynomials. This is remarkable, as the approximation error of previous methods increases as a function of p. Figure 1 shows two kernel functions K(z), their approximations K̂(z), and the corresponding pdfs p(w).

5 Experiments

We compare the SRF method with Random Maclaurin (RM) [14] and Tensor Sketch (TS) [15], the other polynomial kernel approximation approaches. Throughout the experiments, we choose the number of Gaussians, N, to equal 10, though the specific number had a negligible effect on the results. The bias term is set to a = 4; other choices such as a = 2, 3 yield similar performance, and results with a variety of parameter settings can be found in the Supplementary Material. The error bars and standard deviations are obtained by running each experiment 10 times across the entire dataset.

Dataset         Method  D = 2^9       D = 2^10      D = 2^11      D = 2^12      D = 2^13      D = 2^14      Exact
usps, p = 3     RM      87.29 ± 0.87  89.11 ± 0.53  90.43 ± 0.49  91.09 ± 0.44  91.48 ± 0.31  91.78 ± 0.32
                TS      89.85 ± 0.35  90.99 ± 0.42  91.37 ± 0.19  91.68 ± 0.19  91.85 ± 0.18  91.90 ± 0.23  96.21
                SRF     90.91 ± 0.32  92.08 ± 0.32  92.50 ± 0.48  93.10 ± 0.26  93.31 ± 0.16  93.28 ± 0.24
usps, p = 7     RM      88.86 ± 1.08  91.01 ± 0.44  92.70 ± 0.38  94.03 ± 0.30  94.54 ± 0.30  94.97 ± 0.26
                TS      92.30 ± 0.52  93.59 ± 0.20  94.53 ± 0.20  94.84 ± 0.10  95.06 ± 0.23  95.27 ± 0.12  96.51
                SRF     92.44 ± 0.31  93.85 ± 0.32  94.79 ± 0.19  95.06 ± 0.21  95.37 ± 0.12  95.51 ± 0.17
usps, p = 10    RM      88.95 ± 0.60  91.41 ± 0.46  93.27 ± 0.28  94.29 ± 0.34  95.19 ± 0.21  95.53 ± 0.25
                TS      92.41 ± 0.48  93.85 ± 0.34  94.75 ± 0.26  95.31 ± 0.28  95.55 ± 0.25  95.91 ± 0.17  96.56
                SRF     92.63 ± 0.46  94.33 ± 0.33  95.18 ± 0.26  95.60 ± 0.27  95.78 ± 0.23  95.85 ± 0.16
usps, p = 20    RM      88.67 ± 0.98  91.09 ± 0.42  93.22 ± 0.39  94.32 ± 0.27  95.24 ± 0.27  95.62 ± 0.24
                TS      91.73 ± 0.88  93.92 ± 0.28  94.68 ± 0.28  95.26 ± 0.31  95.90 ± 0.20  96.07 ± 0.19  96.81
                SRF     92.27 ± 0.48  94.30 ± 0.46  95.48 ± 0.39  95.97 ± 0.32  96.18 ± 0.23  96.28 ± 0.15
gisette, p = 3  RM      89.53 ± 1.43  92.77 ± 0.40  94.49 ± 0.48  95.90 ± 0.31  96.69 ± 0.33  97.01 ± 0.26
                TS      93.52 ± 0.60  95.28 ± 0.71  96.12 ± 0.36  96.76 ± 0.40  97.06 ± 0.19  97.12 ± 0.27  98.00
                SRF     91.72 ± 0.92  94.39 ± 0.65  95.62 ± 0.47  96.50 ± 0.40  96.91 ± 0.36  97.05 ± 0.19
gisette, p = 7  RM      89.44 ± 1.44  92.77 ± 0.57  95.15 ± 0.60  96.37 ± 0.46  96.90 ± 0.46  97.27 ± 0.22
                TS      92.89 ± 0.66  95.29 ± 0.39  96.32 ± 0.47  96.66 ± 0.34  97.16 ± 0.25  97.58 ± 0.25  97.90
                SRF     92.75 ± 1.01  94.85 ± 0.53  96.42 ± 0.49  97.07 ± 0.30  97.50 ± 0.24  97.53 ± 0.15
gisette, p = 10 RM      89.91 ± 0.58  93.16 ± 0.40  94.94 ± 0.72  96.19 ± 0.49  96.88 ± 0.23  97.15 ± 0.40
                TS      92.48 ± 0.62  94.61 ± 0.60  95.72 ± 0.53  96.60 ± 0.58  96.99 ± 0.28  97.41 ± 0.20  98.10
                SRF     92.42 ± 0.85  95.10 ± 0.47  96.35 ± 0.42  97.15 ± 0.34  97.57 ± 0.23  97.75 ± 0.14
gisette, p = 20 RM      89.40 ± 0.98  92.46 ± 0.67  94.37 ± 0.55  95.67 ± 0.43  96.14 ± 0.55  96.63 ± 0.40
                TS      90.49 ± 1.07  92.88 ± 0.42  94.43 ± 0.69  95.41 ± 0.71  96.24 ± 0.44  96.97 ± 0.28  98.00
                SRF     92.12 ± 0.62  94.22 ± 0.45  95.85 ± 0.54  96.94 ± 0.29  97.47 ± 0.24  97.75 ± 0.32

Table 1: Comparison of classification accuracy (in %) on different datasets for different polynomial orders (p) and varying feature map dimensionality (D). The Exact column refers to the accuracy of the exact polynomial kernel trained with libSVM. More results are given in the Supplementary Material.

Figure 3: Comparison of CRAFT features on the usps dataset with polynomial order p = 10 and feature maps of dimension D = 2^12 (methods: RM, TS, SRF and their CRAFT variants).
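The bound of eqn. (6) can be checked against a direct quadrature of the single-Gaussian MSE from Section 4 (our own numerical sanity check; eqn. (6) arises from a large-p expansion, and already dominates the quadrature value at moderate p such as p = 10, a = 4):

```python
import math

def srf_single_gaussian_mse(a, p, n=4000):
    """L = (1/2) * integral_0^2 [Khat(z) - K(z)]^2 dz for the single-Gaussian
    surrogate Khat(z) = exp(-p z^2 / a^2) used to derive eqn. (6),
    evaluated with the trapezoidal rule on n subintervals."""
    h = 2.0 / n
    def sq(z):
        return (math.exp(-p * z * z / (a * a)) - (1.0 - z * z / (a * a)) ** p) ** 2
    s = 0.5 * (sq(0.0) + sq(2.0)) + sum(sq(i * h) for i in range(1, n))
    return 0.5 * h * s

def srf_bound(a, p):
    """Upper bound of eqn. (6): (105/4096) * sqrt(pi/2) * a / p^(5/2)."""
    return (105.0 / 4096.0) * math.sqrt(math.pi / 2.0) * a / p ** 2.5
```

The O(p^{-5/2}) decay of the bound is also easy to observe: doubling p shrinks srf_bound by a factor of 2^{5/2} ≈ 5.7.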
(a) Logarithm of the ratio of the i-th leading eigenvalue of the approximate kernel to that of the exact kernel, constructed using 1,000 points. CRAFT features are projected from 2^14-dimensional maps. (b) Mean squared error. (c) Classification accuracy.

Kernel approximation. The main focus of this work is to improve the quality of kernel approximation, which we measure by computing the mean squared error (MSE) between the exact kernel and its approximation across the entire dataset. Figure 2 shows the MSE as a function of the dimensionality (D) of the nonlinear maps. SRF provides lower MSE than the other methods, especially for higher-order polynomials. This observation is consistent with our theoretical analysis in Section 4. As a corollary, SRF provides more compact maps for the same kernel approximation error. Furthermore, SRF is stable in terms of MSE, whereas TS and RM have relatively large variance. Classification with linear SVM. We train linear classifiers with liblinear [3] and evaluate classification accuracy on various datasets, two of which are summarized in Table 1; additional results are available in the Supplementary Material. As expected, accuracy improves with higher-dimensional nonlinear maps and higher-order polynomials. It is important to note that better kernel approximation does not necessarily lead to better classification performance, because the original kernel might not be optimal for the task [20, 21]. Nevertheless, we observe that SRF features tend to yield better classification performance in most cases. Rank-Deficiency. Hamid et al. [16] observe that RM and TS produce nonlinear features that are rank-deficient. Their approximation quality can be improved by first mapping the input to a higher-dimensional feature space and then randomly projecting it to a lower-dimensional space. This method is known as CRAFT.
Figure 3(a) shows the logarithm of the ratio of the i-th eigenvalue of the various approximate kernel matrices to that of the exact kernel. For a full-rank, accurate approximation, this value should be constant and equal to zero, which is close to the case for SRF. RM and TS deviate from zero significantly, demonstrating their rank-deficiency. Figures 3(b) and 3(c) show the effect of the CRAFT method on MSE and classification accuracy. CRAFT improves RM and TS, but has no effect, or even a negative one, on SRF. These observations all indicate that SRF is less rank-deficient than RM and TS.

Figure 4: Computational time to generate the randomized feature map for 1,000 random samples on fixed hardware with p = 3. (a) d = 1,000. (b) d = D.
Figure 5: Doubly stochastic gradient learning curves with RFF and SRF features on ImageNet (x-axis: training examples in millions; y-axis: Top-1 test error in %).

Computational Efficiency. Both RM and SRF have computational complexity O(ndD), whereas TS scales as O(np(d + D log D)), where D is the number of nonlinear maps, n is the number of samples, d is the original feature dimension, and p is the polynomial order. Therefore the scalability of TS is better than that of SRF when D is of the same order as d (O(D log D) vs. O(D²)). However, the computational cost of SRF does not depend on p, making SRF more efficient for higher-order polynomials. Moreover, there is little computational overhead involved in the SRF method, which enables it to outperform TS for practical values of D even though it is asymptotically inferior. As shown in Figure 4(a), even in the low-order case (p = 3), SRF is more efficient than TS for a fixed d = 1000. In Figure 4(b), where d = D, SRF is still more efficient than TS up to D ≲ 4000. Large-scale Learning.
We investigate the scalability of the SRF method on the ImageNet 2012 dataset, which consists of 1.3 million 256 × 256 color images from 1000 classes. We employ the doubly stochastic gradient method of Dai et al. [22], which utilizes two stochastic approximations — one from random training points and the other from random features associated with the kernel. We use the same architecture and parameter settings as [22] (including the fixed convolutional neural network parameters), except that we replace the RFF kernel layer with an ℓ2 normalization step and an SRF kernel layer with parameters a = 4 and p = 10. The learning curves in Figure 5 suggest that SRF features may perform better than RFF features on this large-scale dataset. We also evaluate the model with multi-view testing, in which max-voting is performed on 10 transformations of the test set. We obtain a Top-1 test error of 44.4%, which is comparable to the 44.5% error reported in [22]. These results demonstrate that the unit-norm restriction does not have a negative impact on performance in this case, and that polynomial kernels can be successfully scaled to large datasets using the SRF method.

6 Conclusion

We have described a novel technique to generate compact nonlinear features for polynomial kernels applied to data on the unit sphere. It approximates the Fourier transform of kernel functions as the positive projection of an indefinite combination of Gaussians. It achieves more compact maps than previous approaches, especially for higher-order polynomials. SRF also shows less feature redundancy, leading to lower kernel approximation error. The performance of SRF is also more stable than that of previous approaches, due to reduced variance. Moreover, the proposed approach extends easily beyond polynomial kernels: the same techniques apply equally well to any shift-invariant radial kernel function, positive definite or not.
In the future, we would also like to explore adaptive sampling procedures tuned to the training data distribution in order to further improve the kernel approximation accuracy, especially when D is large, i.e., when the Monte-Carlo error is low and the kernel approximation error dominates.

Acknowledgments. We thank the anonymous reviewers for their valuable feedback and Bo Xie for facilitating experiments with the doubly stochastic gradient method.

References
[1] Corinna Cortes and Vladimir Vapnik. Support-vector networks. Machine Learning, 20(3):273–297, 1995.
[2] T. Joachims. Training linear SVMs in linear time. In Proceedings of the 12th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 217–226. ACM, 2006.
[3] Rong-En Fan, Kai-Wei Chang, Cho-Jui Hsieh, Xiang-Rui Wang, and Chih-Jen Lin. LIBLINEAR: A library for large linear classification. The Journal of Machine Learning Research, 9:1871–1874, 2008.
[4] Shai Shalev-Shwartz, Yoram Singer, Nathan Srebro, and Andrew Cotter. Pegasos: Primal estimated subgradient solver for SVM. Mathematical Programming, 127(1):3–30, 2011.
[5] Ali Rahimi and Benjamin Recht. Random features for large-scale kernel machines. In Advances in Neural Information Processing Systems, pages 1177–1184, 2007.
[6] Salomon Bochner. Harmonic Analysis and the Theory of Probability. Dover Publications, 1955.
[7] Subhransu Maji and Alexander C. Berg. Max-margin additive classifiers for detection. In International Conference on Computer Vision, pages 40–47. IEEE, 2009.
[8] V. Sreekanth, Andrea Vedaldi, Andrew Zisserman, and C. Jawahar. Generalized RBF feature maps for efficient detection. In British Machine Vision Conference, 2010.
[9] Fuxin Li, Catalin Ionescu, and Cristian Sminchisescu. Random Fourier approximations for skewed multiplicative histogram kernels. In Pattern Recognition, pages 262–271. Springer, 2010.
[10] A. Vedaldi and A. Zisserman. Efficient additive kernels via explicit feature maps.
IEEE Transactions on Pattern Analysis and Machine Intelligence, 34(3):480–492, 2012.
[11] Jiyan Yang, Vikas Sindhwani, Quanfu Fan, Haim Avron, and Michael Mahoney. Random Laplace feature maps for semigroup kernels on histograms. In Computer Vision and Pattern Recognition (CVPR), pages 971–978. IEEE, 2014.
[12] Hideki Isozaki and Hideto Kazawa. Efficient support vector classifiers for named entity recognition. In Proceedings of the 19th International Conference on Computational Linguistics - Volume 1, pages 1–7. Association for Computational Linguistics, 2002.
[13] Kwang In Kim, Keechul Jung, and Hang Joon Kim. Face recognition using kernel principal component analysis. IEEE Signal Processing Letters, 9(2):40–42, 2002.
[14] Purushottam Kar and Harish Karnick. Random feature maps for dot product kernels. In International Conference on Artificial Intelligence and Statistics, pages 583–591, 2012.
[15] Ninh Pham and Rasmus Pagh. Fast and scalable polynomial kernels via explicit feature maps. In Proceedings of the 19th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 239–247. ACM, 2013.
[16] Raffay Hamid, Ying Xiao, Alex Gittens, and Dennis DeCoste. Compact random feature maps. In Proceedings of the 31st International Conference on Machine Learning (ICML-14), pages 19–27, 2014.
[17] Ping Li, Trevor J. Hastie, and Kenneth W. Church. Improving random projections using marginal information. In Learning Theory, pages 635–649. Springer, 2006.
[18] Isaac J. Schoenberg. Metric spaces and completely monotone functions. Annals of Mathematics, pages 811–841, 1938.
[19] E. E. Kummer. De integralibus quibusdam definitis et seriebus infinitis. Journal für die reine und angewandte Mathematik, 17:228–242, 1837.
[20] Felix X. Yu, Sanjiv Kumar, Henry Rowley, and Shih-Fu Chang. Compact nonlinear maps and circulant extensions. arXiv preprint arXiv:1503.03893, 2015.
[21] Dmitry Storcheus, Mehryar Mohri, and Afshin Rostamizadeh.
Foundations of coupled nonlinear dimensionality reduction. arXiv preprint arXiv:1509.08880, 2015.
[22] Bo Dai, Bo Xie, Niao He, Yingyu Liang, Anant Raj, Maria-Florina Balcan, and Le Song. Scalable kernel methods via doubly stochastic gradients. In Advances in Neural Information Processing Systems, pages 3041–3049, 2014.
A Dual-Augmented Block Minimization Framework for Learning with Limited Memory

Ian E.H. Yen∗ Shan-Wei Lin† Shou-De Lin†
∗University of Texas at Austin  †National Taiwan University
∗ianyen@cs.utexas.edu  {r03922067,sdlin}@csie.ntu.edu.tw

Abstract
In the past few years, several techniques have been proposed for training linear Support Vector Machines (SVMs) in the limited-memory setting, where a dual block-coordinate descent (dual-BCD) method was used to balance the cost spent on I/O and computation. In this paper, we consider the more general setting of regularized Empirical Risk Minimization (ERM) when data cannot fit into memory. In particular, we generalize the existing block minimization framework, based on strong duality and an Augmented Lagrangian technique, to achieve global convergence for general convex ERM. The block minimization framework is flexible in the sense that, given a solver working under sufficient memory, one can integrate it with the framework to obtain a solver that is globally convergent under the limited-memory condition. We conduct experiments on L1-regularized classification and regression problems to corroborate our convergence theory and compare the proposed framework to algorithms adapted from the online and distributed settings, which shows the superiority of the proposed approach on data ten times larger than the memory capacity.

1 Introduction
Nowadays, data of huge scale are prevalent in many applications of statistical learning and data mining. It has been argued that model performance can be boosted by increasing both the number of samples and the number of features, and, through crowdsourcing technology, annotated samples of terabytes in storage size can be generated [3]. As a result, the performance of a model is no longer limited by the sample size but by the amount of available computational resources. In other words, the data size can easily go beyond the size of the physical memory of the available machines.
Under this setting, most learning algorithms become slow due to expensive I/O from secondary storage devices [26]. When it comes to huge-scale data, two settings are often considered — online and distributed learning. In the online setting, each sample is processed only once without storage, while in the distributed setting, one has several machines that can jointly fit the data into memory. However, real cases are often not as extreme as these two — there are usually machines that can fit part of the data, but not all of it. In this setting, an algorithm can only process a block of data at a time. Therefore, balancing the time spent on I/O and computation becomes the key issue [26]. Although one can employ an online learning algorithm in this setting, it has been observed that online methods require a large number of epochs to achieve performance comparable to batch methods, and at each epoch they spend most of the time on I/O instead of computation [2, 21, 26]. The situation for online methods can become worse for problems with non-smooth, non-strongly-convex objective functions, where online methods exhibit qualitatively slower convergence [15, 16] than that proved for strongly convex problems like the SVM [14]. In the past few years, several algorithms have been proposed to solve large-scale linear Support Vector Machines (SVMs) in the limited-memory setting [2, 21, 26]. These approaches are based on a dual Block Coordinate Descent (dual-BCD) algorithm, which decomposes the original problem into a series of block sub-problems, each of which requires only a block of data to be loaded into memory. The approach was proved to converge linearly to the global optimum, and demonstrated fast convergence empirically. However, the convergence of the algorithm relies on the assumption of a smooth dual problem, which, as we show, does not hold generally for other regularized Empirical Risk Minimization (ERM) problems.
As a result, although the dual-BCD approach can be extended to the more general setting, it is not globally convergent except for a class of problems with an L2-regularizer. In this paper, we first show how to adapt the dual block-coordinate descent method of [2, 26] to the general setting of regularized Empirical Risk Minimization (ERM), which subsumes most supervised learning problems, ranging from classification and regression to ranking and recommendation. Then we discuss the convergence issue that arises when the underlying ERM is not strongly convex. A Primal Proximal Point (or Dual Augmented Lagrangian) method is then proposed to address this issue, which, as we show, results in a block minimization algorithm with global convergence to the optimum for convex regularized ERM problems. The framework is flexible in the sense that, given a solver working under the sufficient-memory condition, it can be integrated into the block minimization framework to obtain a solver that is globally convergent under the limited-memory condition. We conduct experiments on L1-regularized classification and regression problems to corroborate our convergence theory, which show that the proposed simple dual-augmented technique changes the convergence behavior dramatically. We also compare the proposed framework to algorithms adapted from the online and distributed settings. In particular, we describe how to adapt a distributed optimization framework — the Alternating Direction Method of Multipliers (ADMM) [1] — to the limited-memory setting, and show that, although the adapted algorithm is effective, it is not as efficient as the proposed framework specially designed for the limited-memory setting. Note that our experiments do not include some recently proposed distributed learning algorithms (CoCoA etc.) [7, 10] that apply only to ERM with an L2-regularizer, or other distributed methods designed for specific loss functions [19].
2 Problem Setup

In this work, we consider the regularized Empirical Risk Minimization problem, which, given a data set D = {(Φ_n, y_n)}_{n=1}^N, estimates a model through

  min_{w∈R^d, ξ_n∈R^p}  F(w, ξ) = R(w) + Σ_{n=1}^N L_n(ξ_n)   s.t. Φ_n w = ξ_n, n ∈ [N],   (1)

where w ∈ R^d is the model parameter to be estimated, Φ_n is a p-by-d design matrix that encodes the features of the n-th data sample, L_n(ξ_n) is a convex loss function that penalizes the discrepancy between the ground truth and the prediction vector ξ_n ∈ R^p, and R(w) is a convex regularization term penalizing model complexity. The formulation (1) subsumes a large class of statistical learning problems, ranging from classification [27] and regression [17] to ranking [8] and convex clustering [24]. For example, in a classification problem we have p = |Y|, where Y is the set of all possible labels, and L_n(ξ) can be defined as the logistic loss L_n(ξ) = log(Σ_{k∈Y} exp(ξ_k)) − ξ_{y_n}, as in logistic regression, or the hinge loss L_n(ξ) = max_{k∈Y}(1 − δ_{k,y_n} + ξ_k − ξ_{y_n}), as used in the support vector machine; in a (multi-task) regression problem, the target variable consists of K real values, Y = R^K, the prediction vector has p = K dimensions, and the square loss L_n(ξ) = (1/2)‖ξ − y_n‖₂² is often used. There are also a variety of regularizers R(w) employed in different applications, including the L2-regularizer R(w) = (λ/2)‖w‖² in ridge regression, the L1-regularizer R(w) = λ‖w‖₁ in the Lasso, the nuclear norm R(w) = λ‖w‖_* in matrix completion, and a family of structured group norms R(w) = λ‖w‖_G [11]. Although the specific forms of L_n(ξ) and R(w) do not affect the implementation of the limited-memory training procedure, two properties of these functions — strong convexity and smoothness — have key effects on the behavior of the block minimization algorithm.

Definition 1 (Strong Convexity). A function f(x) is strongly convex iff it is lower bounded by a simple quadratic function

  f(y) ≥ f(x) + ∇f(x)^T (y − x) + (m/2)‖x − y‖²   (2)

for some constant m > 0 and ∀x, y ∈ dom(f).

Definition 2 (Smoothness).
A function f(x) is smooth iff it is upper bounded by a simple quadratic function

  f(y) ≤ f(x) + ∇f(x)^T (y − x) + (M/2)‖x − y‖²   (3)

for some constant 0 ≤ M < ∞ and ∀x, y ∈ dom(f).

For instance, the square loss and logistic loss are both smooth and strongly convex¹, while the hinge loss satisfies neither. On the other hand, most regularizers, such as the L1-norm, structured group norms, and the nuclear norm, are neither smooth nor strongly convex, except for the L2-regularizer, which satisfies both. In the following, we demonstrate the effects of these properties on block minimization algorithms. Throughout this paper, we assume that a solver for (1) that works under the sufficient-memory condition is given, and our task is to design an algorithmic framework that integrates with the solver to efficiently solve (1) when the data cannot fit into memory. We assume, however, that the d-dimensional parameter vector w can fit into memory.

3 Dual Block Minimization

In this section, we extend the block minimization framework of [26] from the linear SVM to the general setting of regularized ERM (1). The dual of (1) can be expressed as

  min_{µ∈R^d, α_n∈R^p}  R*(−µ) + Σ_{n=1}^N L*_n(α_n)   s.t. Σ_{n=1}^N Φ_n^T α_n = µ,   (4)

where R*(−µ) is the convex conjugate of R(w) and L*_n(α_n) is the convex conjugate of L_n(ξ_n). The block minimization algorithm of [26] basically performs dual Block-Coordinate Descent (dual-BCD) on (4) by dividing the whole data set D into K blocks D_{B1}, ..., D_{BK} and optimizing one block of dual variables (α_{Bk}, µ) at a time, where D_{Bk} = {(Φ_n, y_n)}_{n∈Bk} and α_{Bk} = {α_n | n ∈ Bk}. In [26], the dual problem (4) is derived explicitly in order to perform the algorithm. However, for many sparsity-inducing regularizers, such as the L1-norm and the nuclear norm, it is more efficient and convenient to solve (1) in the primal [6, 28].
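Before deriving the implicit dual, the pieces of Section 2 can be made concrete in code. The following is an illustrative sketch of formulation (1) with the constraint ξ_n = Φ_n w substituted in; the helper names are ours, not from the paper:

```python
import numpy as np

def logistic_loss(xi, y):        # L_n(xi) = log(sum_k exp(xi_k)) - xi_y
    return np.log(np.exp(xi).sum()) - xi[y]

def hinge_loss(xi, y):           # L_n(xi) = max_k (1 - delta_{k,y} + xi_k - xi_y)
    margins = 1.0 + xi - xi[y]
    margins[y] -= 1.0            # delta_{k,y} removes the "+1" on the true label
    return margins.max()

def square_loss(xi, y):          # L_n(xi) = 0.5 * ||xi - y||^2
    return 0.5 * np.sum((xi - y) ** 2)

def erm_objective(w, Phis, ys, loss, reg):
    # F(w) = R(w) + sum_n L_n(Phi_n w), i.e. (1) with xi_n = Phi_n w plugged in
    return reg(w) + sum(loss(Phi @ w, y) for Phi, y in zip(Phis, ys))

l1 = lambda w: 1.0 * np.abs(w).sum()    # R(w) = lambda * ||w||_1, lambda = 1
l2 = lambda w: 0.5 * np.dot(w, w)       # R(w) = (lambda/2) * ||w||^2

rng = np.random.default_rng(0)
Phis = [rng.standard_normal((3, 5)) for _ in range(4)]  # p = 3 classes, d = 5
ys = [rng.integers(3) for _ in range(4)]
w = np.zeros(5)
print(erm_objective(w, Phis, ys, logistic_loss, l1))    # log(3) per sample at w = 0
```

Swapping the `loss` and `reg` arguments recovers logistic regression, multiclass SVM, Lasso, or ridge regression within the same template, which is exactly why a single limited-memory framework over (1) is useful.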
Therefore, here, instead of explicitly forming the dual problem, we express it implicitly as

  G(α) = min_{w,ξ} L(α, w, ξ),   (5)

where L(α, w, ξ) is the Lagrangian function of (1), and maximize (5) w.r.t. a block of variables α_{Bk} from the primal instead of the dual by strong duality,

  max_{α_{Bk}} min_{w,ξ} L(α, w, ξ) = min_{w,ξ} max_{α_{Bk}} L(α, w, ξ),   (6)

with the other dual variables {α_{Bj} = α^t_{Bj}}_{j≠k} fixed. The maximization over the dual variables α_{Bk} in (6) then enforces the primal equalities Φ_n w = ξ_n, n ∈ B_k, which results in the block minimization problem

  min_{w∈R^d, ξ_n∈R^p}  R(w) + Σ_{n∈B_k} L_n(ξ_n) + µ_{B_k}^{tT} w   s.t. Φ_n w = ξ_n, n ∈ B_k,   (7)

where µ^t_{B_k} = Σ_{n∉B_k} Φ_n^T α^t_n. Note that, in (7), the variables {ξ_n}_{n∉B_k} have been dropped since they are not relevant to the block of dual variables α_{B_k}; thus, given the d-dimensional vector µ^t_{B_k}, one can solve (7) without accessing the data {(Φ_n, y_n)}_{n∉B_k} outside the block B_k. Throughout the dual-BCD algorithm, we maintain the d-dimensional vector µ^t = Σ_{n=1}^N Φ_n^T α^t_n and compute µ^t_{B_k} via

  µ^t_{B_k} = µ^t − Σ_{n∈B_k} Φ_n^T α^t_n   (8)

at the beginning of solving each block subproblem (7). Since subproblem (7) has the same form as the original problem (1), except for one additional linear augmented term µ^T_{B_k} w, one can easily adapt the solver of (1) to solve (7) by providing an augmented version of the gradient, ∇_w F̄(w, ξ) = ∇_w F(w, ξ) + µ^t_{B_k}, to the solver, where F̄(·) denotes the function with augmented terms and F(·) the function without. Note that the augmented term µ^t_{B_k} is constant and separable w.r.t. coordinates, so it adds little overhead to the solver. After obtaining the solution (w*, ξ*_{B_k}) of (7), we can derive the corresponding optimal dual variables α_{B_k} for (6) according to the KKT conditions and maintain µ subsequently by

  α^{t+1}_n = ∇_{ξ_n} L_n(ξ*_n), n ∈ B_k,   (9)

  µ^{t+1} = µ^t_{B_k} + Σ_{n∈B_k} Φ_n^T α^{t+1}_n.   (10)

¹ The logistic loss is strongly convex when its inputs ξ are within a bounded range, which is true as long as we have a non-zero regularizer R(w).
The procedure is summarized in Algorithm 1, which requires a total memory capacity of O(d + |D_{B_k}| + p|B_k|). The factor d comes from the storage of µ^t and w^t, the factor |D_{B_k}| from the storage of the data block, and the factor p|B_k| from the storage of α_{B_k}. Note that this is the same space complexity as required by the original algorithm proposed for the linear SVM [26], where p = 1 in the binary classification setting.

4 Dual-Augmented Block Minimization

Although the block minimization Algorithm 1 can be applied to the general regularized ERM problem (1), the sequence {α^t}_{t=0}^∞ it produces is not guaranteed to converge to the global optimum of (1). In fact, global convergence of Algorithm 1 occurs only in some special cases. One sufficient condition for the global convergence of a Block-Coordinate Descent algorithm is that the terms of the objective function that are not separable w.r.t. the blocks must be smooth (Definition 2). The dual objective function (4) (expressed using only α) comprises the two terms R*(−Σ_{n=1}^N Φ_n^T α_n) + Σ_{n=1}^N L*_n(α_n), where the second term is separable w.r.t. {α_n}_{n=1}^N, and thus also w.r.t. {α_{B_k}}_{k=1}^K, while the first term couples the variables α_{B_1}, ..., α_{B_K} of all the blocks. As a result, if R*(−µ) is a smooth function according to Definition 2, then Algorithm 1 converges globally to the optimum. However, the following theorem shows that this is true only when R(w) is strongly convex.

Theorem 1 (Strong/Smooth Duality). Assume f(·) is closed and convex. Then f(·) is smooth with parameter M if and only if its convex conjugate f*(·) is strongly convex with parameter m = 1/M.

A proof of the above theorem can be found in [9]. According to Theorem 1, the block minimization Algorithm 1 is not globally convergent if R(w) is not strongly convex, which, however, is the case for most regularizers other than the L2-norm R(w) = (1/2)‖w‖², as discussed in Section 2.
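Theorem 1 can be checked numerically on a one-dimensional example. The rough sketch below (a grid-based conjugate; the construction is ours, not the paper's) uses f(x) = (m/2)x², whose conjugate is f*(y) = y²/(2m) and hence smooth with M = 1/m:

```python
import numpy as np

m = 4.0                                  # strong-convexity parameter of f
f = lambda x: 0.5 * m * x * x            # f(x) = (m/2) x^2

xs = np.linspace(-50, 50, 200001)        # grid for the sup in the conjugate
def conjugate(y):
    # f*(y) = sup_x { x*y - f(x) }; the exact value is y^2 / (2m)
    return np.max(xs * y - f(xs))

ys = np.linspace(-2, 2, 5)
vals = np.array([conjugate(y) for y in ys])
print(np.allclose(vals, ys ** 2 / (2 * m), atol=1e-4))

# Smoothness constant of f* via a second difference: should be near 1/m.
h = 0.5
curvature = (conjugate(h) - 2 * conjugate(0.0) + conjugate(-h)) / h ** 2
print(curvature)  # approximately 1/m = 0.25
```

The same experiment with f(x) = |x| (not strongly convex) gives a conjugate that is an indicator function of [-1, 1], which is not smooth; this is exactly the failure mode of Algorithm 1 for the L1-regularizer.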
In this section, we propose a remedy to this problem: a Dual Augmented Lagrangian method (or, equivalently, a Primal Proximal Point method) that creates a dual objective function with the desired property, iteratively approaches the original objective (1), and results in fast global convergence of the dual-BCD approach.

Algorithm 1 Dual Block Minimization
 1. Split data D into blocks B_1, B_2, ..., B_K.
 2. Initialize µ^0 = 0.
 for t = 0, 1, ... do
   3.1. Draw k uniformly from [K].
   3.2. Load D_{B_k} and α^t_{B_k} into memory.
   3.3. Compute µ^t_{B_k} from (8).
   3.4. Solve (7) to obtain (w*, ξ*_{B_k}).
   3.5. Compute α^{t+1}_{B_k} by (9).
   3.6. Maintain µ^{t+1} through (10).
   3.7. Save α^{t+1}_{B_k} out of memory.
 end for

Algorithm 2 Dual-Augmented Block Minimization
 1. Split data D into blocks B_1, B_2, ..., B_K.
 2. Initialize w^0 = 0, µ^0 = 0.
 for t = 0, 1, ... (outer iteration) do
   for s = 0, 1, ..., S do
     3.1.1. Draw k uniformly from [K].
     3.1.2. Load D_{B_k} and α^s_{B_k} into memory.
     3.1.3. Compute µ^s_{B_k} from (15).
     3.1.4. Solve (14) to obtain (w*, ξ*_{B_k}).
     3.1.5. Compute α^{s+1}_{B_k} by (16).
     3.1.6. Maintain µ^{s+1} through (17).
     3.1.7. Save α^{s+1}_{B_k} out of memory.
   end for
   3.2. w^{t+1} = w*(α^S).
 end for

4.1 Algorithm

The Dual Augmented Lagrangian (DAL) method (or, equivalently, the Proximal Point method) modifies the original problem by introducing a sequence of proximal maps

  w^{t+1} = argmin_w F(w) + (1/(2η_t))‖w − w^t‖²,   (11)

where F(w) denotes the ERM problem (1). Under this simple modification, instead of doing Block-Coordinate Descent in the dual of the original problem (1), we perform dual-BCD on the proximal subproblem (11). As we show in the next section, the dual formulation of (11) has the required property for global convergence of the dual-BCD algorithm — all terms involving more than one block of variables α_{B_k} are smooth. Given the current iterate w^t, the Dual-Augmented Block Minimization algorithm optimizes the dual of the proximal-point problem (11) w.r.t.
one block of variables α_{B_k} at a time, keeping the others fixed, {α_{B_j} = α^{(t,s)}_{B_j}}_{j≠k}:

  max_{α_{B_k}} min_{w,ξ} L(w, ξ, α) = min_{w,ξ} max_{α_{B_k}} L(w, ξ, α),   (12)

where L(·) is the Lagrangian of (11),

  L(w, ξ, α) = F(w, ξ) + Σ_{n=1}^N α_n^T (Φ_n w − ξ_n) + (1/(2η_t))‖w − w^t‖².   (13)

Once again, the maximization w.r.t. α_{B_k} in (12) enforces the equalities Φ_n w = ξ_n, n ∈ B_k, and thus leads to a primal sub-problem involving only the data in block B_k:

  min_{w∈R^d, ξ_n∈R^p}  R(w) + Σ_{n∈B_k} L_n(ξ_n) + µ_{B_k}^{(t,s)T} w + (1/(2η_t))‖w − w^t‖²   s.t. Φ_n w = ξ_n, n ∈ B_k,   (14)

where µ^{(t,s)}_{B_k} = Σ_{n∉B_k} Φ_n^T α^{(t,s)}_n. Note that (14) is almost the same as (7), except that it has a proximal-point augmented term. Therefore, one can follow the same procedure as in Algorithm 1 to maintain the vector µ^{(t,s)} = Σ_{n=1}^N Φ_n^T α^{(t,s)}_n and compute

  µ^{(t,s)}_{B_k} = µ^{(t,s)} − Σ_{n∈B_k} Φ_n^T α^{(t,s)}_n   (15)

before solving each block subproblem (14). After obtaining the solution (w*, ξ*_{B_k}) of (14), we update the dual variables α_{B_k} as

  α^{(t,s+1)}_n = ∇_{ξ_n} L_n(ξ*_n), n ∈ B_k,   (16)

and maintain µ subsequently as

  µ^{(t,s+1)} = µ^{(t,s)}_{B_k} + Σ_{n∈B_k} Φ_n^T α^{(t,s+1)}_n.   (17)

The sub-problem (14) is of a form similar to the original ERM problem (1). Since the augmented term is a simple quadratic function separable w.r.t. each coordinate, given a solver for (1) working under the sufficient-memory condition, one can easily adapt it by modifying

  ∇_w F̄(w, ξ) = ∇_w F(w, ξ) + µ^t_{B_k} + (w − w^t)/η_t,
  ∇²_w F̄(w, ξ) = ∇²_w F(w, ξ) + I/η_t,

where F̄(·) denotes the function with augmented terms and F(·) the function without. The block minimization procedure is repeated until every sub-problem (14) reaches a tolerance ϵ_in. Then the proximal-point update w^{t+1} = w*(α^{(t,s)}) is performed, where w*(α^{(t,s)}) is the solution of (14) for the latest dual iterate α^{(t,s)}. The resulting algorithm is summarized in Algorithm 2.

4.2 Analysis

In this section, we analyze the convergence rate of Algorithm 2 to the optimum of (1).
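Before the analysis, the outer proximal-point update (11) can be exercised on a toy one-dimensional problem in which both the proximal step and the minimizer have closed forms. This is our own illustration, not the paper's solver; here F(w) = ½(w − a)² + λ|w|:

```python
import numpy as np

# F(w) = 0.5*(w - a)^2 + lam*|w| (1-D lasso); its minimizer is the
# soft threshold S_lam(a). Names and constants are illustrative.
a, lam, eta = 1.0, 0.3, 1.0

def soft(v, t):
    return np.sign(v) * max(abs(v) - t, 0.0)

def prox_step(w_t):
    # argmin_w F(w) + (1/(2*eta)) * (w - w_t)^2: the two quadratics
    # combine, then soft-thresholding handles the lam*|w| term.
    scale = 1.0 + 1.0 / eta
    return soft((a + w_t / eta) / scale, lam / scale)

w = 0.0
for t in range(60):
    w = prox_step(w)        # outer proximal-point iteration (11)

w_opt = soft(a, lam)        # exact minimizer of F
print(w, w_opt)             # iterates converge to the minimizer
```

With η = 1 the map contracts by a factor 1/(1 + η) = 1/2 in the unthresholded region, so the distance to w_opt roughly halves each outer iteration, matching the linear rate asserted for well-behaved F below.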
First, we show that the proximal-point formulation (11) has a dual problem with the desired property for global convergence of Block-Coordinate Descent. In particular, the dual of (11) takes the form

  min_{α_n∈R^p}  R̃*(−Σ_{n=1}^N Φ_n^T α_n) + Σ_{n=1}^N L*_n(α_n),   (18)

where R̃*(·) is the convex conjugate of R̃(w) = R(w) + (1/(2η_t))‖w − w^t‖², and since R̃(w) is strongly convex with parameter m = 1/η_t, the convex conjugate R̃*(·) is smooth with parameter M = η_t according to Theorem 1. Therefore, (18) is in the composite form of a convex, smooth function plus a convex, block-separable function. This type of function has been widely studied in the literature on Block-Coordinate Descent [13]. In particular, one can show that Block-Coordinate Descent applied to (18) converges globally to the optimum at a fast rate, by the following theorem.

Theorem 2 (BCD Convergence). Let the sequence {α^s}_{s=1}^∞ be the iterates produced by Block-Coordinate Descent in the inner loop of Algorithm 2, and let K be the number of blocks. Denote by F̃*(α) the dual objective function of (18) and by F̃*_opt the optimal value of (18). Then, with probability 1 − ρ,

  F̃*(α^s) − F̃*_opt ≤ ϵ, for s ≥ βK log((F̃*(α^0) − F̃*_opt)/(ρϵ)),   (19)

for some constant β > 0, if (i) L_n(·) is smooth, or (ii) L_n(·) is a polyhedral function and R(·) is also polyhedral or smooth. Otherwise, for any convex L_n(·), R(·), we have

  F̃*(α^s) − F̃*_opt ≤ ϵ, for s ≥ (cK/ϵ) log((F̃*(α^0) − F̃*_opt)/(ρϵ)),   (20)

for some constant c > 0.

Note that the above analysis (in the appendix) does not assume an exact solution of each block subproblem. Instead, it only assumes that each block minimization step yields a dual ascent amount proportional to that produced by a single (dual) proximal gradient ascent step on the block of dual variables. For the outer loop of Primal Proximal-Point (or Dual Augmented Lagrangian) iterates (11), we show the following convergence theorem.

Theorem 3 (Proximal Point Convergence).
Let F(w) be the objective of the regularized ERM problem (1), and let R = max_v max_w {‖v − w‖ : F(w) ≤ F(w^0), F(v) ≤ F(w^0)} be the radius of the initial level set. The sequence {w^t}_{t=1}^∞ produced by the Proximal-Point update (11) with η_t = η satisfies

  F(w^{t+1}) − F_opt ≤ ϵ, for t ≥ τ log(ω/ϵ),   (21)

for some constants τ, ω > 0, if both L_n(·) and R(·) are (i) strictly convex and smooth, or (ii) polyhedral. Otherwise, for any convex F(w), we have F(w^{t+1}) − F_opt ≤ R²/(2ηt).

The following theorem further shows that solving sub-problem (11) inexactly with tolerance ϵ/t suffices for convergence to overall precision ϵ, where t is the number of outer iterations required.

Theorem 4 (Inexact Proximal Map). Suppose that, for a given iterate w^t, each sub-problem (11) is solved inexactly, s.t. the solution ŵ^{t+1} satisfies

  ‖ŵ^{t+1} − prox_{η_t F}(w^t)‖ ≤ ϵ_0.   (22)

Let {ŵ^t}_{t=1}^∞ be the sequence of iterates produced by inexact proximal updates and {w^t}_{t=1}^∞ that generated by exact updates. After t iterations, we have

  ‖ŵ^t − w^t‖ ≤ tϵ_0.   (23)

Note that for L_n(·), R(·) strictly convex and smooth, or polyhedral, t is of order O(log(1/ϵ)), and thus only O(K log(1/ϵ) log(t/ϵ)) = O(K log²(1/ϵ)) block minimization steps are required overall to achieve ϵ suboptimality. Otherwise, as long as L_n(·) is smooth, for any convex regularizer R(·), t is of order O(1/ϵ), so O(K(1/ϵ) log(t/ϵ)) = O((K log(1/ϵ))/ϵ) total block minimization steps are required.

4.3 Practical Issues

4.3.1 Solving Sub-Problems Inexactly

While the analysis in Section 4.2 assumes exact solutions of the subproblems, in practice the block minimization framework does not require solving subproblems (11), (14) exactly. In our experiments, it suffices for fast convergence of the proximal-point update (11) to solve subproblem (14) with only a single pass over all blocks of variables α_{B_1}, ..., α_{B_K}, and to limit the number of iterations the designated solver spends on each subproblem (7), (14) to no more than some parameter T_max.
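The effect of capping the inner solver at a few iterations per block can be mimicked on a toy least-squares problem; the sketch below (our own toy setup, not the paper's solver) updates coordinate blocks with T_max gradient steps each and still decreases the objective monotonically:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 20))
y = rng.standard_normal(100)

f = lambda w: 0.5 * np.sum((X @ w - y) ** 2)
L = np.linalg.norm(X, 2) ** 2            # global Lipschitz bound for the step size

blocks = np.array_split(np.arange(20), 4)  # K = 4 coordinate blocks
w = np.zeros(20)
T_max = 3                                   # inexact inner solve: only 3 steps
history = [f(w)]
for epoch in range(30):
    for B in rng.permutation(4):            # sampling w/o replacement (Sec. 4.3.2)
        for _ in range(T_max):
            g = X[:, blocks[B]].T @ (X @ w - y)   # gradient w.r.t. block B
            w[blocks[B]] -= g / L                 # capped inner iterations
    history.append(f(w))

print(history[0], history[-1])  # objective decreases monotonically
```

Each capped inner solve is a fixed fraction of a proximal-gradient ascent step's progress, which is the weak requirement the analysis above actually needs.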
4.3.2 Random Selection w/o Replacement

In Algorithms 1 and 2, the block to be optimized is chosen uniformly at random from k ∈ {1, ..., K}, which eases the analysis for proving a better convergence rate [13]. However, in practice, to avoid unbalanced update frequencies among blocks, we sample without replacement in both Algorithms 1 and 2; that is, every K iterations we generate a random permutation π_1, ..., π_K of the block indices 1, ..., K and optimize the block subproblems (7), (14) in the order π_1, ..., π_K. This also eases the checking of the inner-loop stopping condition.

4.3.3 Storage of Dual Variables

Both Algorithms 1 and 2 need to store the dual variables α_{B_k} in memory and load/save them from/to some secondary storage unit, which requires time linear in p|B_k|. For some problems, such as multi-label classification with a large number of labels or structured prediction with a large number of factors, this can be very expensive. In this situation, one can instead maintain µ_{B̄_k} = Σ_{n∈B_k} Φ_n^T α_n = µ − µ_{B_k} directly. Note that µ_{B̄_k} has I/O and storage cost linear in d, which can be much smaller than p|B_k| in a low-dimensional problem.

5 Experiment

In this section, we compare the proposed Dual-Augmented Block Minimization framework (Algorithm 2) to the vanilla Dual Block Coordinate Descent algorithm [26] and to methods adapted from online and distributed learning. The experiments are conducted on the L1-regularized L2-loss SVM problem [27] and the Lasso (L1-regularized regression) problem [17] in the limited-memory setting, with data size 10 times larger than the available memory. For both problems, we use a state-of-the-art randomized coordinate descent method [13, 27] as the solver for the sub-problems (7), (14), (59), (63), and we set parameters η_t = 1, λ = 1 (of the L1-regularizer) for all experiments.
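The bookkeeping of Section 4.3.3, maintaining µ and recovering block quantities by subtraction rather than recomputing the full sum, can be sketched with toy arrays (the shapes and names are ours):

```python
import numpy as np

rng = np.random.default_rng(0)
N, p, d = 12, 3, 6
Phi = rng.standard_normal((N, p, d))          # Phi_n: p x d design matrices
alpha = rng.standard_normal((N, p))           # dual variables alpha_n
blocks = np.array_split(np.arange(N), 3)      # K = 3 blocks

mu = np.einsum('npd,np->d', Phi, alpha)       # mu = sum_n Phi_n^T alpha_n

k = 1                                          # pick a block B_k
# Eq. (8): mu_{B_k} = mu - sum_{n in B_k} Phi_n^T alpha_n
mu_Bk = mu - np.einsum('npd,np->d', Phi[blocks[k]], alpha[blocks[k]])

# ... the block subproblem updates alpha on B_k (here: a dummy new value) ...
alpha[blocks[k]] = rng.standard_normal((len(blocks[k]), p))

# Eq. (10): maintain mu incrementally instead of recomputing the full sum
mu = mu_Bk + np.einsum('npd,np->d', Phi[blocks[k]], alpha[blocks[k]])

mu_full = np.einsum('npd,np->d', Phi, alpha)  # recomputed from scratch
print(np.allclose(mu, mu_full))               # True: the bookkeeping is consistent
```

Only the d-dimensional vectors mu and mu_Bk ever need to stay resident, which is the point of Section 4.3.3 when p|B_k| is large.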
Four public benchmark data sets are used — webspam and rcv1-binary for classification, and year-pred and E2006 for regression — which can be obtained from the LIBSVM data set collections. For year-pred and E2006, the features are generated from Random Fourier Features [12, 23] that approximate the effect of a Gaussian RBF kernel. Table 1 summarizes the data statistics. The algorithms in comparison and their shorthands are listed below; all solvers are implemented in C/C++ and run on a 64-bit machine with a 2.83GHz Intel(R) Xeon(R) CPU. We constrained the process to use no more than 1/10 of the memory required to store the whole data.

Table 1: Data statistics (data stored in sparse format). The last two columns specify the memory consumption (MB) of the whole data set and of one block when the data are split into K = 10 partitions.
Data       #train    #test    dimension   #non-zeros       Memory   Block
webspam    315,000   31,500   680,714     1,174,704,031    20,679   2,068
rcv1       202,420   20,242   7,951,176   656,977,694      12,009   1,201
year-pred  463,715   51,630   2,000       927,893,715      13,702   1,370
E2006      16,087    3,308    30,000      8,088,636        8,088    809

• OnlineMD: Stochastic Mirror Descent method specially designed for L1-regularized problems, proposed in [15], with step size chosen from 10^-2 to 10^2 for best performance.
• D-BCD: Dual Block-Coordinate Descent method (Algorithm 1).²
• DA-BCD: Dual-Augmented Block Minimization (Algorithm 2).
• ADMM: ADMM for limited-memory learning (Algorithm 3 in Appendix B).
• BC-ADMM: Block-Coordinate ADMM that updates a randomly chosen block of dual variables at a time for limited-memory learning (Algorithm 4 in Appendix B).

Figure 1: Relative function value difference to the optimum and testing RMSE (error rate) on Lasso (top) and L1-regularized L2-SVM (bottom). Best RMSE for year-pred: 9.1320; for E2006: 0.4430. Best error rate for webspam: 0.4761%; for rcv1: 2.213%.

We use wall-clock time, which includes both I/O and computation, as the measure of training time in all experiments. In Figure 1, three measures are plotted versus training time: the relative objective function difference to the optimum, testing RMSE, and accuracy. Figure 1 shows the results, where, as expected, the dual Block-Coordinate Descent (D-BCD) method without augmentation cannot improve the objective after a certain number of iterations. However, with an extremely simple modification, the Dual-Augmented Block Minimization (DA-BCD) algorithm becomes not only globally convergent but converges at a rate several times faster than the other approaches. Among all methods, the convergence of Online Mirror Descent (SMIDAS) is significantly slower, which is expected, since (i) online Mirror Descent on a non-smooth, non-strongly-convex function converges at a rate qualitatively slower than the linear convergence rate of DA-BCD and ADMM [15, 16], and (ii) an online method does not utilize the available memory capacity and thus spends an unbalanced amount of time on I/O rather than computation. For the methods adapted from distributed optimization, the experiments show that BC-ADMM consistently, but only slightly, improves on ADMM, and that both converge much more slowly than the DA-BCD approach, presumably due to the conservative updates on the dual variables.

Acknowledgement
We thank the support of Telecommunication Lab., Chunghwa Telecom Co., Ltd via TL-103-8201, AOARD via No. FA2386-13-1-4045, Ministry of Science and Technology, National Taiwan University and Intel Co.
via MOST102-2911-I-002-001, NTU103R7501, 1022923-E-002-007-MY2, 102-2221-E-002-170, and 103-2221-E-002-104-MY2.

²The objective value obtained from D-BCD fluctuates considerably; in the figures we plot the lowest value achieved by D-BCD from the beginning up to time t.

References
[1] S. Boyd, N. Parikh, E. Chu, B. Peleato, and J. Eckstein. Distributed optimization and statistical learning via the alternating direction method of multipliers. Foundations and Trends in Machine Learning, 2011.
[2] K. Chang and D. Roth. Selective block minimization for faster convergence of limited memory large-scale linear models. In SIGKDD. ACM, 2011.
[3] J. Deng, W. Dong, R. Socher, L. J. Li, K. Li, and L. Fei-Fei. ImageNet: A large-scale hierarchical image database. In CVPR, 2009.
[4] A. Hoffman. On approximate solutions of systems of linear inequalities. Journal of Research of the National Bureau of Standards, 1952.
[5] M. Hong and Z. Luo. On the linear convergence of the alternating direction method of multipliers, 2012.
[6] C. Hsieh, I. Dhillon, P. Ravikumar, S. Becker, and P. Olsen. QUIC & dirty: A quadratic approximation approach for dirty statistical models. In NIPS, 2014.
[7] M. Jaggi, V. Smith, M. Takáč, J. Terhorst, S. Krishnan, T. Hofmann, and M. Jordan. Communication-efficient distributed dual coordinate ascent. In NIPS, 2014.
[8] T. Joachims. A support vector method for multivariate performance measures. In ICML, 2005.
[9] S. Kakade, S. Shalev-Shwartz, and A. Tewari. Applications of strong convexity–strong smoothness duality to learning with matrices. CoRR, 2009.
[10] C. Ma, V. Smith, M. Jaggi, M. Jordan, P. Richtárik, and M. Takáč. Adding vs. averaging in distributed primal-dual optimization. ICML, 2015.
[11] G. Obozinski, L. Jacob, and J. Vert. Group lasso with overlaps: the latent group lasso approach. arXiv preprint, 2011.
[12] A. Rahimi and B. Recht. Random features for large-scale kernel machines. In NIPS, 2007.
[13] P. Richtárik and M. Takáč. Iteration complexity of randomized block-coordinate descent methods for minimizing a composite function. Mathematical Programming, 2014.
[14] S. Shalev-Shwartz, Y. Singer, N. Srebro, and A. Cotter. Pegasos: Primal estimated sub-gradient solver for SVM. Mathematical Programming, 2011.
[15] S. Shalev-Shwartz and A. Tewari. Stochastic methods for l1-regularized loss minimization. JMLR, 2011.
[16] N. Srebro, K. Sridharan, and A. Tewari. On the universality of online mirror descent. In NIPS, 2011.
[17] R. Tibshirani. Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society, 1996.
[18] R. Tomioka, T. Suzuki, and M. Sugiyama. Super-linear convergence of dual augmented Lagrangian algorithm for sparsity regularized estimation. JMLR, 2011.
[19] I. Trofimov and A. Genkin. Distributed coordinate descent for l1-regularized logistic regression. arXiv preprint, 2014.
[20] P. Wang and C. Lin. Iteration complexity of feasible descent methods for convex optimization. JMLR, 2014.
[21] I. Yen, C. Chang, T. Lin, S. Lin, and S. Lin. Indexed block coordinate descent for large-scale linear classification with limited memory. In SIGKDD. ACM, 2013.
[22] I. Yen, C. Hsieh, P. Ravikumar, and I. Dhillon. Constant nullspace strong convexity and fast convergence of proximal methods under high-dimensional settings. In NIPS, 2014.
[23] I. Yen, T. Lin, S. Lin, P. Ravikumar, and I. Dhillon. Sparse random feature algorithm as coordinate descent in Hilbert space. In NIPS, 2014.
[24] I. Yen, X. Lin, K. Zhong, P. Ravikumar, and I. Dhillon. A convex exemplar-based approach to MAD-Bayes Dirichlet process mixture models. In ICML, 2015.
[25] I. Yen, K. Zhong, C. Hsieh, P. Ravikumar, and I. Dhillon. Sparse linear programming via primal and dual augmented coordinate descent. In NIPS, 2015.
[26] H. Yu, C. Hsieh, K. Chang, and C. Lin. Large linear classification when data cannot fit in memory. SIGKDD, 2010.
[27] G. Yuan, K. Chang, C. Hsieh, and C. Lin. A comparison of optimization methods and software for large-scale L1-regularized linear classification. JMLR, 2010.
[28] K. Zhong, I. Yen, I. Dhillon, and P. Ravikumar. Proximal quasi-Newton for computationally intensive l1-regularized m-estimators. In NIPS, 2014.
Convolutional Networks on Graphs for Learning Molecular Fingerprints

David Duvenaud†, Dougal Maclaurin†, Jorge Aguilera-Iparraguirre, Rafael Gómez-Bombarelli, Timothy Hirzel, Alán Aspuru-Guzik, Ryan P. Adams
Harvard University

Abstract
We introduce a convolutional neural network that operates directly on graphs. These networks allow end-to-end learning of prediction pipelines whose inputs are graphs of arbitrary size and shape. The architecture we present generalizes standard molecular feature extraction methods based on circular fingerprints. We show that these data-driven features are more interpretable and have better predictive performance on a variety of tasks.

1 Introduction
Recent work in materials design used neural networks to predict the properties of novel molecules by generalizing from examples. One difficulty with this task is that the input to the predictor, a molecule, can be of arbitrary size and shape. Currently, most machine learning pipelines can only handle inputs of a fixed size. The current state of the art is to use off-the-shelf fingerprint software to compute fixed-dimensional feature vectors, and to use those features as inputs to a fully-connected deep neural network or other standard machine learning method. This formula was followed by [28, 3, 19]. During training, the molecular fingerprint vectors were treated as fixed. In this paper, we replace the bottom layer of this stack – the function that computes molecular fingerprint vectors – with a differentiable neural network whose input is a graph representing the original molecule. In this graph, vertices represent individual atoms and edges represent bonds. The lower layers of this network are convolutional in the sense that the same local filter is applied to each atom and its neighborhood. After several such layers, a global pooling step combines features from all the atoms in the molecule.
These neural graph fingerprints offer several advantages over fixed fingerprints:
• Predictive performance. By using data to adapt to the task at hand, machine-optimized fingerprints can provide substantially better predictive performance than fixed fingerprints. We show that neural graph fingerprints match or beat the predictive performance of standard fingerprints on solubility, drug efficacy, and organic photovoltaic efficiency datasets.
• Parsimony. Fixed fingerprints must be extremely large to encode all possible substructures without overlap. For example, [28] used a fingerprint vector of size 43,000, after having removed rarely-occurring features. Differentiable fingerprints can be optimized to encode only relevant features, reducing downstream computation and regularization requirements.
• Interpretability. Standard fingerprints encode each possible fragment completely distinctly, with no notion of similarity between fragments. In contrast, each feature of a neural graph fingerprint can be activated by similar but distinct molecular fragments, making the feature representation more meaningful.

†Equal contribution.

Figure 1: Left: A visual representation of the computational graph of both standard circular fingerprints and neural graph fingerprints. First, a graph is constructed matching the topology of the molecule being fingerprinted, in which nodes represent atoms and edges represent bonds. At each layer, information flows between neighbors in the graph. Finally, each node in the graph turns on one bit in the fixed-length fingerprint vector. Right: A more detailed sketch including the bond information used in each operation.

2 Circular fingerprints
The state of the art in molecular fingerprints is extended-connectivity circular fingerprints (ECFP) [21]. Circular fingerprints [6] are a refinement of the Morgan algorithm [17], designed to encode which substructures are present in a molecule in a way that is invariant to atom-relabeling.
Circular fingerprints generate each layer's features by applying a fixed hash function to the concatenated features of the neighborhood in the previous layer. The results of these hashes are then treated as integer indices, and a 1 is written to the fingerprint vector at the index given by the feature vector at each node in the graph. Figure 1 (left) shows a sketch of this computational architecture. Ignoring collisions, each index of the fingerprint denotes the presence of a particular substructure. The size of the substructures represented by each index depends on the depth of the network; thus the number of layers is referred to as the 'radius' of the fingerprints. Circular fingerprints are analogous to convolutional networks in that they apply the same operation locally everywhere and combine information in a global pooling step.

3 Creating a differentiable fingerprint
The space of possible network architectures is large. In the spirit of starting from a known-good configuration, we designed a differentiable generalization of circular fingerprints. This section describes our replacement of each discrete operation in circular fingerprints with a differentiable analog.

Hashing The purpose of the hash functions applied at each layer of circular fingerprints is to combine information about each atom and its neighboring substructures. This ensures that any change in a fragment, no matter how small, will lead to a different fingerprint index being activated. We replace the hash operation with a single layer of a neural network. Using a smooth function allows the activations to be similar when the local molecular structure varies in unimportant ways.

Indexing Circular fingerprints use an indexing operation to combine all the nodes' feature vectors into a single fingerprint of the whole molecule. Each node sets a single bit of the fingerprint to one, at an index determined by the hash of its feature vector.
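The hash-and-index scheme just described can be sketched in a few lines of Python. This is a hedged illustration rather than a real ECFP implementation: the toy molecule, the integer atom features, and the use of Python's built-in `hash` as a stand-in for the fixed hash function are all assumptions made for demonstration.

```python
# Minimal sketch of circular-fingerprint-style hashing.
# `neighbors` is a toy adjacency list; `hash` on tuples stands in for the
# paper's fixed hash function; atom feature values are invented.

def circular_fingerprint(atom_features, neighbors, radius, fp_length):
    """atom_features: list of ints; neighbors: list of lists of atom indices."""
    fp = [0] * fp_length
    r = list(atom_features)                      # layer-0 features: atom lookup
    for _ in range(radius):
        new_r = []
        for a, nbrs in enumerate(neighbors):
            v = (r[a],) + tuple(sorted(r[i] for i in nbrs))  # canonical concat
            new_r.append(hash(v))                # combine neighborhood by hashing
        r = new_r
        for h in r:
            fp[h % fp_length] = 1                # write 1 at index = hash mod S
    return fp

# Toy 3-atom chain 0-1-2 with made-up feature values
fp = circular_fingerprint([6, 8, 6], [[1], [0, 2], [1]], radius=2, fp_length=16)
```

Note how any change to a single atom feature propagates into different hashes, and hence different fingerprint bits, exactly the brittleness that the smooth replacement below is designed to avoid.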
This pooling-like operation converts an arbitrary-sized graph into a fixed-sized vector. For small molecules and a large fingerprint length, the fingerprints are always sparse. We use the softmax operation as a differentiable analog of indexing. In essence, each atom is asked to classify itself as belonging to a single category. The sum of all these classification label vectors produces the final fingerprint. This operation is analogous to the pooling operation in standard convolutional neural networks.

Algorithm 1 Circular fingerprints
1: Input: molecule, radius R, fingerprint length S
2: Initialize: fingerprint vector f ← 0_S
3: for each atom a in molecule
4:   r_a ← g(a)  ▷ lookup atom features
5: for L = 1 to R  ▷ for each layer
6:   for each atom a in molecule
7:     r_1 ... r_N = neighbors(a)
8:     v ← [r_a, r_1, ..., r_N]  ▷ concatenate
9:     r_a ← hash(v)  ▷ hash function
10:    i ← mod(r_a, S)  ▷ convert to index
11:    f_i ← 1  ▷ write 1 at index
12: Return: binary vector f

Algorithm 2 Neural graph fingerprints
1: Input: molecule, radius R, hidden weights H_1^1 ... H_R^5, output weights W_1 ... W_R
2: Initialize: fingerprint vector f ← 0_S
3: for each atom a in molecule
4:   r_a ← g(a)  ▷ lookup atom features
5: for L = 1 to R  ▷ for each layer
6:   for each atom a in molecule
7:     r_1 ... r_N = neighbors(a)
8:     v ← r_a + Σ_{i=1}^{N} r_i  ▷ sum
9:     r_a ← σ(v H_L^N)  ▷ smooth function
10:    i ← softmax(r_a W_L)  ▷ sparsify
11:    f ← f + i  ▷ add to fingerprint
12: Return: real-valued vector f

Figure 2: Pseudocode of circular fingerprints (left) and neural graph fingerprints (right). Differences are highlighted in blue. Every non-differentiable operation is replaced with a differentiable analog.

Canonicalization Circular fingerprints are identical regardless of the ordering of atoms in each neighborhood. This invariance is achieved by sorting the neighboring atoms according to their features and bond features.
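The forward pass of Algorithm 2 can be sketched in NumPy. This is a simplified illustration, not the authors' implementation: for brevity it uses a single hidden weight matrix per layer rather than one per bond count, and all shapes, weights, and the toy molecule are invented.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def neural_fingerprint(atom_feats, neighbors, H_list, W_list):
    """atom_feats: (N, F) array; neighbors: list of neighbor index lists."""
    fp = np.zeros(W_list[0].shape[1])            # fingerprint of length S
    r = atom_feats.copy()
    for H, W in zip(H_list, W_list):             # one iteration per radius
        new_r = np.empty_like(r)
        for a, nbrs in enumerate(neighbors):
            v = r[a] + r[nbrs].sum(axis=0)       # sum instead of concatenate
            new_r[a] = np.tanh(v @ H)            # smooth replacement for hashing
        r = new_r
        for a in range(len(neighbors)):
            fp += softmax(r[a] @ W)              # smooth replacement for indexing
    return fp

rng = np.random.default_rng(0)
N, F, S, R = 3, 4, 8, 2                          # illustrative sizes
feats = rng.normal(size=(N, F))
Hs = [rng.normal(size=(F, F)) for _ in range(R)]
Ws = [rng.normal(size=(F, S)) for _ in range(R)]
fp = neural_fingerprint(feats, [[1], [0, 2], [1]], Hs, Ws)
```

Because each softmax sums to one, the resulting fingerprint entries sum to N × R, and with very large weights the tanh/softmax pair approaches the hash/index pair of Algorithm 1, as discussed below.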
We experimented with this sorting scheme, and also with applying the local feature transform to all possible permutations of the local neighborhood. An alternative to canonicalization is to apply a permutation-invariant function, such as summation. In the interests of simplicity and scalability, we chose summation.

Circular fingerprints can be interpreted as a special case of neural graph fingerprints having large random weights. This is because, in the limit of large input weights, tanh nonlinearities approach step functions, which when concatenated form a simple hash function. Also, in the limit of large input weights, the softmax operator approaches a one-hot-coded argmax operator, which is analogous to an indexing operation. Algorithms 1 and 2 summarize these two algorithms and highlight their differences. Given a fingerprint length L and F features at each layer, the parameters of neural graph fingerprints consist of a separate output weight matrix of size F × L for each layer, as well as a set of hidden-to-hidden weight matrices of size F × F at each layer, one for each possible number of bonds an atom can have (up to 5 in organic molecules).

4 Experiments
We ran two experiments to demonstrate that neural fingerprints with large random weights behave similarly to circular fingerprints. First, we examined whether pairwise distances between molecules measured with circular fingerprints were similar to those measured with neural fingerprints. Figure 3 (left) shows a scatterplot of pairwise distances between circular vs. neural fingerprints. Fingerprints had length 2048 and were calculated on pairs of molecules from the solubility dataset [4]. Distance was measured using a continuous generalization of the Tanimoto (a.k.a. Jaccard) similarity measure, given by

distance(x, y) = 1 − (Σᵢ min(xᵢ, yᵢ)) / (Σᵢ max(xᵢ, yᵢ))    (1)

There is a correlation of r = 0.823 between the distances.
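The continuous Tanimoto distance of Eq. (1) is a direct sum of element-wise minima over a sum of element-wise maxima; the example vectors below are invented for illustration.

```python
def tanimoto_distance(x, y):
    """Continuous Tanimoto (Jaccard) distance, Eq. (1)."""
    num = sum(min(a, b) for a, b in zip(x, y))   # Σ_i min(x_i, y_i)
    den = sum(max(a, b) for a, b in zip(x, y))   # Σ_i max(x_i, y_i)
    return 1.0 - num / den

# For binary vectors this reduces to 1 - |intersection| / |union|.
d = tanimoto_distance([1, 0, 1, 1], [1, 1, 0, 1])  # → 1 - 2/4 = 0.5
```

For real-valued fingerprints the same formula applies unchanged, which is what makes it a continuous generalization of the usual binary Tanimoto similarity.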
The line of points on the right of the plot shows that for some pairs of molecules, binary ECFP fingerprints have exactly zero overlap. Second, we examined the predictive performance of neural fingerprints with large random weights vs. that of circular fingerprints. Figure 3 (right) shows average predictive performance on the solubility dataset, using linear regression on top of fingerprints. The performances of both methods follow similar curves. In contrast, the performance of neural fingerprints with small random weights follows a different curve, and is substantially better. This suggests that even with random weights, the relatively smooth activation of neural fingerprints helps generalization performance.

Figure 3: Left: Comparison of pairwise distances between molecules, measured using circular fingerprints and neural graph fingerprints with large random weights (r = 0.823). Right: Predictive performance (RMSE, log Mol/L) as a function of fingerprint radius for circular fingerprints (red), neural graph fingerprints with fixed large random weights (green), and neural graph fingerprints with fixed small random weights (blue). The performance of neural graph fingerprints with large random weights closely matches the performance of circular fingerprints.

4.1 Examining learned features
To demonstrate that neural graph fingerprints are interpretable, we show substructures which most activate individual features in a fingerprint vector. Each feature of a circular fingerprint vector can only be activated by a single fragment of a single radius, except for accidental collisions.
In contrast, neural graph fingerprint features can be activated by variations of the same structure, making them more interpretable and allowing shorter feature vectors.

Solubility features Figure 4 shows the fragments that maximally activate the most predictive features of a fingerprint. The fingerprint network was trained as input to a linear model predicting solubility, as measured in [4]. The feature shown in the top row has a positive predictive relationship with solubility, and is most activated by fragments containing a hydrophilic R-OH group, a standard indicator of solubility. The feature shown in the bottom row, strongly predictive of insolubility, is activated by non-polar repeated ring structures.

Figure 4: Examining fingerprints optimized for predicting solubility. Shown here are representative examples of molecular fragments (highlighted in blue) which most activate different features of the fingerprint. Top row: fragments most activated by the pro-solubility feature, the feature most predictive of solubility. Bottom row: fragments most activated by the anti-solubility feature, the feature most predictive of insolubility.

Toxicity features We trained the same model architecture to predict toxicity, as measured in two different datasets in [26]. Figure 5 shows fragments which maximally activate the feature most predictive of toxicity, in two separate datasets.

Figure 5: Visualizing fingerprints optimized for predicting toxicity. Shown here are representative samples of molecular fragments (highlighted in red) which most activate the feature most predictive of toxicity, on the SR-MMP (top) and NR-AHR (bottom) datasets. Top row: the most predictive feature identifies groups containing a sulphur atom attached to an aromatic ring.
Bottom row: the most predictive feature identifies fused aromatic rings, also known as polycyclic aromatic hydrocarbons, a well-known class of carcinogens.

[27] constructed similar visualizations, but in a semi-manual way: to determine which toxic fragments activated a given neuron, they searched over a hand-made list of toxic substructures and chose the one most correlated with a given neuron. In contrast, our visualizations are generated automatically, without the need to restrict the range of possible answers beforehand.

4.2 Predictive Performance
We ran several experiments to compare the predictive performance of neural graph fingerprints to that of the standard state-of-the-art setup: circular fingerprints fed into a fully-connected neural network.

Experimental setup Our pipeline takes as input the SMILES [30] string encoding of each molecule, which is then converted into a graph using RDKit [20]. We also used RDKit to produce the extended circular fingerprints used in the baseline. Hydrogen atoms were treated implicitly. In our convolutional networks, the initial atom and bond features were chosen to be similar to those used by ECFP: initial atom features concatenated a one-hot encoding of the atom's element, its degree, the number of attached hydrogen atoms, the implicit valence, and an aromaticity indicator. The bond features were a concatenation of whether the bond type was single, double, triple, or aromatic, whether the bond was conjugated, and whether the bond was part of a ring.

Training and Architecture Training used batch normalization [11]. We also experimented with tanh vs. relu activation functions for both the neural fingerprint network layers and the fully-connected network layers; relu had a slight but consistent performance advantage on the validation set. We also experimented with dropconnect [29], a variant of dropout in which weights are randomly set to zero instead of hidden units, but found that it led to worse validation error in general.
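The ECFP-style atom featurization described in the experimental setup can be sketched as follows. The vocabularies below (element list, degree range, hydrogen and valence counts) are illustrative assumptions rather than the exact ones used in the paper, and in practice the raw values would come from RDKit.

```python
# Sketch of an initial atom featurization: one-hot element, degree, attached
# hydrogens, implicit valence, plus an aromaticity flag. Vocabularies invented.
ELEMENTS = ['C', 'N', 'O', 'S', 'F', 'Cl', 'other']  # last slot = catch-all

def one_hot(value, choices):
    v = [0] * len(choices)
    idx = choices.index(value) if value in choices else len(choices) - 1
    v[idx] = 1
    return v

def atom_features(element, degree, num_h, valence, is_aromatic):
    return (one_hot(element, ELEMENTS)
            + one_hot(degree, [0, 1, 2, 3, 4, 5])
            + one_hot(num_h, [0, 1, 2, 3, 4])
            + one_hot(valence, [0, 1, 2, 3, 4, 5])
            + [1 if is_aromatic else 0])

f = atom_features('C', 3, 1, 4, True)  # e.g. an aromatic CH carbon
```

Concatenating fixed-size one-hot blocks gives every atom the same feature length, which is what lets the per-layer weight matrices of the fingerprint network operate uniformly across atoms.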
Each experiment optimized for 10,000 minibatches of size 100 using the Adam algorithm [13], a variant of RMSprop that includes momentum.

Hyperparameter Optimization To optimize hyperparameters, we used random search. The hyperparameters of all methods were optimized using 50 trials for each cross-validation fold. The following hyperparameters were optimized: log learning rate, log of the initial weight scale, the log L2 penalty, fingerprint length, fingerprint depth (up to 6), and the size of the hidden layer in the fully-connected network. Additionally, the size of the hidden feature vector in the convolutional neural fingerprint networks was optimized.

Table 1: Mean predictive accuracy of neural fingerprints compared to standard circular fingerprints.

Dataset                        Solubility [4]   Drug efficacy [5]   Photovoltaic efficiency [8]
Units                          log Mol/L        EC50 in nM          percent
Predict mean                   4.29 ± 0.40      1.47 ± 0.07         6.40 ± 0.09
Circular FPs + linear layer    1.71 ± 0.13      1.13 ± 0.03         2.63 ± 0.09
Circular FPs + neural net      1.40 ± 0.13      1.36 ± 0.10         2.00 ± 0.09
Neural FPs + linear layer      0.77 ± 0.11      1.15 ± 0.02         2.58 ± 0.18
Neural FPs + neural net        0.52 ± 0.07      1.16 ± 0.03         1.43 ± 0.09

Datasets We compared the performance of standard circular fingerprints against neural graph fingerprints on a variety of domains:
• Solubility: The aqueous solubility of 1144 molecules as measured by [4].
• Drug efficacy: The half-maximal effective concentration (EC50) in vitro of 10,000 molecules against a sulfide-resistant strain of P. falciparum, the parasite that causes malaria, as measured by [5].
• Organic photovoltaic efficiency: The Harvard Clean Energy Project [8] uses expensive DFT simulations to estimate the photovoltaic efficiency of organic molecules. We used a subset of 20,000 molecules from this dataset.
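The random hyperparameter search described in the setup above can be sketched as follows. The search ranges and candidate lists are invented for illustration, since the paper does not state its exact bounds; only the log-scale sampling of continuous hyperparameters, the depth cap of 6, and the 50 trials per fold come from the text.

```python
import math
import random

# Sketch of random hyperparameter search: continuous hyperparameters are
# drawn on a log scale, discrete ones uniformly. Ranges are illustrative.
def sample_hyperparams(rng):
    return {
        'learning_rate': math.exp(rng.uniform(math.log(1e-4), math.log(1e-1))),
        'init_scale':    math.exp(rng.uniform(math.log(1e-3), math.log(1.0))),
        'l2_penalty':    math.exp(rng.uniform(math.log(1e-6), math.log(1e-1))),
        'fp_length':     rng.choice([16, 32, 64, 128, 256, 512, 1024, 2048]),
        'fp_depth':      rng.randint(1, 6),      # "up to 6" per the paper
        'hidden_size':   rng.choice([32, 64, 128, 256, 512]),
    }

rng = random.Random(0)
trials = [sample_hyperparams(rng) for _ in range(50)]  # 50 trials per fold
```

Sampling in log space spreads trials evenly across orders of magnitude, which matters for quantities like learning rates and penalties whose useful values span several decades.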
Predictive accuracy We compared the performance of circular fingerprints and neural graph fingerprints under two conditions: in the first, predictions were made by a linear layer using the fingerprints as input; in the second, predictions were made by a one-hidden-layer neural network using the fingerprints as input. In all settings, all differentiable parameters in the composed models were optimized simultaneously. Results are summarized in Table 1. In all experiments, the neural graph fingerprints matched or beat the accuracy of circular fingerprints, and the methods with a neural network on top of the fingerprints typically outperformed the linear layers.

Software Automatic differentiation (AD) software packages such as Theano [1] significantly speed up development time by providing gradients automatically, but can only handle limited control structures and indexing. Since we required relatively complex control flow and indexing in order to implement variants of Algorithm 2, we used a more flexible automatic differentiation package for Python called Autograd (github.com/HIPS/autograd). This package handles standard Numpy [18] code, and can differentiate code containing while loops, branches, and indexing. Code for computing neural fingerprints and producing visualizations is available at github.com/HIPS/neural-fingerprint.

5 Limitations
Computational cost Neural fingerprints have the same asymptotic complexity in the number of atoms and the depth of the network as circular fingerprints, but have additional terms due to the matrix multiplies necessary to transform the feature vector at each step. To be precise, computing a neural fingerprint of depth R and fingerprint length L for a molecule with N atoms, using a molecular convolutional net having F features at each layer, costs O(RNFL + RNF²).
In practice, training neural networks on top of circular fingerprints usually took several minutes, while training both the fingerprints and the network on top took on the order of an hour on the larger datasets.

Limited computation at each layer How complicated should we make the function that goes from one layer of the network to the next? In this paper we chose the simplest feasible architecture: a single layer of a neural network. However, it may be fruitful to apply multiple layers of nonlinearities between each message-passing step (as in [22]), or to make information preservation easier by adapting the Long Short-Term Memory [10] architecture to pass information upwards.

Limited information propagation across the graph The local message-passing architecture developed in this paper scales well in the size of the graph (due to the low degree of organic molecules), but its ability to propagate information across the graph is limited by the depth of the network. This may be appropriate for small graphs such as those representing the small organic molecules used in this paper. However, in the worst case, it can take a network of depth N/2 to distinguish between graphs of size N. To avoid this problem, [2] proposed a hierarchical clustering of graph substructures. A tree-structured network could examine the structure of the entire graph using only log(N) layers, but would require learning to parse molecules. Techniques from natural language processing [25] might be fruitfully adapted to this domain.

Inability to distinguish stereoisomers Special bookkeeping is required to distinguish between stereoisomers, including enantiomers (mirror images of molecules) and cis/trans isomers (rotation around double bonds). Most circular fingerprint implementations have the option to make these distinctions. Neural fingerprints could be extended to be sensitive to stereoisomers, but this remains a task for future work.
6 Related work
This work is similar in spirit to the neural Turing machine [7], in the sense that we take an existing discrete computational architecture and make each part differentiable in order to do gradient-based optimization.

Neural nets for quantitative structure-activity relationship (QSAR) The modern standard for predicting properties of novel molecules is to compose circular fingerprints with fully-connected neural networks or other regression methods. [3] used circular fingerprints as inputs to an ensemble of neural networks, Gaussian processes, and random forests. [19] used circular fingerprints (of depth 2) as inputs to a multitask neural network, showing that multiple tasks helped performance.

Neural graph fingerprints The most closely related work is [15], who build a neural network having graph-valued inputs. Their approach is to remove all cycles and build the graph into a tree structure, choosing one atom to be the root. A recursive neural network [23, 24] is then run from the leaves to the root to produce a fixed-size representation. Because a graph having N nodes has N possible roots, all N possible graphs are constructed. The final descriptor is a sum of the representations computed by all distinct graphs; there are as many distinct graphs as there are atoms in the molecule. The computational cost of this method thus grows as O(F²N²), where F is the size of the feature vector and N is the number of atoms, making it less suitable for large molecules.

Convolutional neural networks Convolutional neural networks have been used to model images, speech, and time series [14]. However, standard convolutional architectures use a fixed computational graph, making them difficult to apply to objects of varying size or structure, such as molecules. More recently, [12] and others have developed a convolutional neural network architecture for modeling sentences of varying length.
Neural networks on fixed graphs [2] introduce convolutional networks on graphs in the regime where the graph structure is fixed, and each training example differs only in having different features at the vertices of the same graph. In contrast, our networks address the situation where each training input is a different graph.

Neural networks on input-dependent graphs [22] propose a neural network model for graphs having an interesting training procedure: the forward pass consists of running a message-passing scheme to equilibrium, a fact which allows the reverse-mode gradient to be computed without storing the entire forward computation. They apply their network to predicting mutagenesis of molecular compounds as well as web page rankings. [16] also proposes a neural network model for graphs, with a learning scheme whose inner loop optimizes not the training loss but rather the correlation between each newly-proposed vector and the training error residual. They apply their model to a dataset of boiling points of 150 molecular compounds. Our paper builds on these ideas, with the following differences: our method replaces their complex training algorithms with simple gradient-based optimization, generalizes existing circular fingerprint computations, and applies these networks in the context of modern QSAR pipelines which use neural networks on top of the fingerprints to increase model capacity.

Unrolled inference algorithms [9] and others have noted that iterative inference procedures sometimes resemble the feedforward computation of a recurrent neural network. One natural extension of these ideas is to parameterize each inference step, and train a neural network to approximately match the output of exact inference using only a small number of iterations. The neural fingerprint, when viewed in this light, resembles an unrolled message-passing algorithm on the original graph.
7 Conclusion
We generalized existing hand-crafted molecular features to allow their optimization for diverse tasks. By making each operation in the feature pipeline differentiable, we can use standard neural-network training methods to scalably optimize the parameters of these neural molecular fingerprints end-to-end. We demonstrated the interpretability and predictive performance of these new fingerprints. Data-driven features have already replaced hand-crafted features in speech recognition, machine vision, and natural-language processing. Carrying out the same task for virtual screening, drug design, and materials design is a natural next step.

Acknowledgments We thank Edward Pyzer-Knapp, Jennifer Wei, and Samsung Advanced Institute of Technology for their support. This work was partially funded by NSF IIS-1421780.

References
[1] Frédéric Bastien, Pascal Lamblin, Razvan Pascanu, James Bergstra, Ian J. Goodfellow, Arnaud Bergeron, Nicolas Bouchard, and Yoshua Bengio. Theano: new features and speed improvements. Deep Learning and Unsupervised Feature Learning NIPS 2012 Workshop, 2012.
[2] Joan Bruna, Wojciech Zaremba, Arthur Szlam, and Yann LeCun. Spectral networks and locally connected networks on graphs. arXiv preprint arXiv:1312.6203, 2013.
[3] George E. Dahl, Navdeep Jaitly, and Ruslan Salakhutdinov. Multi-task neural networks for QSAR predictions. arXiv preprint arXiv:1406.1231, 2014.
[4] John S. Delaney. ESOL: Estimating aqueous solubility directly from molecular structure. Journal of Chemical Information and Computer Sciences, 44(3):1000–1005, 2004.
[5] Francisco-Javier Gamo, Laura M. Sanz, Jaume Vidal, Cristina de Cozar, Emilio Alvarez, Jose-Luis Lavandera, Dana E. Vanderwall, Darren V. S. Green, Vinod Kumar, Samiul Hasan, et al. Thousands of chemical starting points for antimalarial lead identification. Nature, 465(7296):305–310, 2010.
[6] Robert C. Glem, Andreas Bender, Catrin H. Arnby, Lars Carlsson, Scott Boyer, and James Smith. Circular fingerprints: flexible molecular descriptors with applications from physical chemistry to ADME. IDrugs: the investigational drugs journal, 9(3):199–204, 2006.
[7] Alex Graves, Greg Wayne, and Ivo Danihelka. Neural Turing machines. arXiv preprint arXiv:1410.5401, 2014.
[8] Johannes Hachmann, Roberto Olivares-Amaya, Sule Atahan-Evrenk, Carlos Amador-Bedolla, Roel S. Sánchez-Carrera, Aryeh Gold-Parker, Leslie Vogt, Anna M. Brockway, and Alán Aspuru-Guzik. The Harvard clean energy project: large-scale computational screening and design of organic photovoltaics on the world community grid. The Journal of Physical Chemistry Letters, 2(17):2241–2251, 2011.
[9] John R. Hershey, Jonathan Le Roux, and Felix Weninger. Deep unfolding: Model-based inspiration of novel deep architectures. arXiv preprint arXiv:1409.2574, 2014.
[10] Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735–1780, 1997.
[11] Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:1502.03167, 2015.
[12] Nal Kalchbrenner, Edward Grefenstette, and Phil Blunsom. A convolutional neural network for modelling sentences. Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, June 2014.
[13] Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
[14] Yann LeCun and Yoshua Bengio. Convolutional networks for images, speech, and time series. The Handbook of Brain Theory and Neural Networks, 3361, 1995.
[15] Alessandro Lusci, Gianluca Pollastri, and Pierre Baldi. Deep architectures and deep learning in chemoinformatics: the prediction of aqueous solubility for drug-like molecules. Journal of Chemical Information and Modeling, 53(7):1563–1575, 2013.
[16] Alessio Micheli. Neural network for graphs: A contextual constructive approach. Neural Networks, IEEE Transactions on, 20(3):498–511, 2009.
[17] H. L. Morgan. The generation of a unique machine description for chemical structure. Journal of Chemical Documentation, 5(2):107–113, 1965.
[18] Travis E. Oliphant. Python for scientific computing. Computing in Science & Engineering, 9(3):10–20, 2007.
[19] Bharath Ramsundar, Steven Kearnes, Patrick Riley, Dale Webster, David Konerding, and Vijay Pande. Massively multitask networks for drug discovery. arXiv:1502.02072, 2015.
[20] RDKit: Open-source cheminformatics. www.rdkit.org. [accessed 11-April-2013].
[21] David Rogers and Mathew Hahn. Extended-connectivity fingerprints. Journal of Chemical Information and Modeling, 50(5):742–754, 2010.
[22] F. Scarselli, M. Gori, Ah Chung Tsoi, M. Hagenbuchner, and G. Monfardini. The graph neural network model. Neural Networks, IEEE Transactions on, 20(1):61–80, Jan 2009.
[23] Richard Socher, Eric H. Huang, Jeffrey Pennington, Christopher D. Manning, and Andrew Y. Ng. Dynamic pooling and unfolding recursive autoencoders for paraphrase detection. In Advances in Neural Information Processing Systems, pages 801–809, 2011.
[24] Richard Socher, Jeffrey Pennington, Eric H. Huang, Andrew Y. Ng, and Christopher D. Manning. Semi-supervised recursive autoencoders for predicting sentiment distributions. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 151–161. Association for Computational Linguistics, 2011.
[25] Kai Sheng Tai, Richard Socher, and Christopher D. Manning. Improved semantic representations from tree-structured long short-term memory networks. arXiv preprint arXiv:1503.00075, 2015.
[26] Tox21 Challenge. National Center for Advancing Translational Sciences. http://tripod.nih.gov/tox21/challenge, 2014. [Online; accessed 2-June-2015].
[27] Thomas Unterthiner, Andreas Mayr, Günter Klambauer, and Sepp Hochreiter. Toxicity prediction using deep learning. arXiv preprint arXiv:1503.01445, 2015.
[28] Thomas Unterthiner, Andreas Mayr, Günter Klambauer, Marvin Steijaert, Jörg Wenger, Hugo Ceulemans, and Sepp Hochreiter. Deep learning as an opportunity in virtual screening. In Advances in Neural Information Processing Systems, 2014.
[29] Li Wan, Matthew Zeiler, Sixin Zhang, Yann LeCun, and Rob Fergus. Regularization of neural networks using dropconnect. In International Conference on Machine Learning, 2013.
[30] David Weininger. SMILES, a chemical language and information system. Journal of Chemical Information and Computer Sciences, 28(1):31–36, 1988.
Decomposition Bounds for Marginal MAP Wei Ping∗ Qiang Liu† Alexander Ihler∗ ∗Computer Science, UC Irvine †Computer Science, Dartmouth College {wping,ihler}@ics.uci.edu qliu@cs.dartmouth.edu Abstract Marginal MAP inference involves making MAP predictions in systems defined with latent variables or missing information. It is significantly more difficult than pure marginalization and MAP tasks, for which a large class of efficient and convergent variational algorithms, such as dual decomposition, exist. In this work, we generalize dual decomposition to a generic power sum inference task, which includes marginal MAP, along with pure marginalization and MAP, as special cases. Our method is based on a block coordinate descent algorithm on a new convex decomposition bound that is guaranteed to converge monotonically, and can be parallelized efficiently. We demonstrate our approach on marginal MAP queries defined on real-world problems from the UAI approximate inference challenge, showing that our framework is faster and more reliable than previous methods. 1 Introduction Probabilistic graphical models such as Bayesian networks and Markov random fields provide a useful framework and powerful tools for machine learning. Given a graphical model, inference refers to answering probabilistic queries about the model. There are three common types of inference tasks. The first are max-inference or maximum a posteriori (MAP) tasks, which aim to find the most probable state of the joint probability; exact and approximate MAP inference is widely used in structured prediction [26]. Sum-inference tasks include calculating marginal probabilities and the normalization constant of the distribution, and play a central role in many learning tasks (e.g., maximum likelihood).
Finally, marginal MAP tasks are “mixed” inference problems, which generalize the first two types by marginalizing a subset of variables (e.g., hidden variables) before optimizing over the remainder.1 These tasks arise in latent variable models [e.g., 29, 25] and many decision-making problems [e.g., 13]. All three inference types are generally intractable; as a result, approximate inference, particularly convex relaxations or upper bounding methods, are of great interest. Decomposition methods provide a useful and computationally efficient class of bounds on inference problems. For example, dual decomposition methods for MAP [e.g., 31] give a class of easy-to-evaluate upper bounds which can be directly optimized using coordinate descent [36, 6], subgradient updates [14], or other methods [e.g., 22]. It is easy to ensure both convergence, and that the objective is monotonically decreasing (so that more computation always provides a better bound). The resulting bounds can be used either as stand-alone approximation methods [6, 14], or as a component of search [11]. In summation problems, a notable decomposition bound is tree-reweighted BP (TRW), which bounds the partition function with a combination of trees [e.g., 34, 21, 12, 3]. These bounds are useful in joint inference and learning (or “inferning”) frameworks, allowing learning with approximate inference to be framed as a joint optimization over the model parameters and decomposition bound, often leading to more efficient learning [e.g., 23]. However, far fewer methods have been developed for marginal MAP problems. (Footnote 1: In some literature [e.g., 28], marginal MAP is simply called MAP, and the joint MAP task is called MPE.) In this work, we develop a decomposition bound that has a number of desirable properties: (1) Generality: our bound is sufficiently general to be applied easily to marginal MAP.
(2) Any-time: it yields a bound at any point during the optimization (not just at convergence), so it can be used in an anytime way. (3) Monotonic and convergent: more computational effort gives strictly tighter bounds; note that (2) and (3) are particularly important for high-width approximations, which are expensive to represent and update. (4) Allows optimization over all parameters, including the “weights”, or fractional counting numbers, of the approximation; these parameters often have a significant effect on the tightness of the resulting bound. (5) Compact representation: within a given class of bounds, using fewer parameters to express the bound reduces memory and typically speeds up optimization. We organize the rest of the paper as follows. Section 2 gives some background and notation, followed by connections to related work in Section 3. We derive our decomposed bound in Section 4, and present a (block) coordinate descent algorithm for monotonically tightening it in Section 5. We report experimental results in Section 6 and conclude the paper in Section 7. 2 Background Here, we review some background on graphical models and inference tasks. A Markov random field (MRF) on discrete random variables $x = [x_1, \ldots, x_n] \in \mathcal{X}^n$ is a probability distribution

$$p(x; \theta) = \exp\Big[\sum_{\alpha \in F} \theta_\alpha(x_\alpha) - \Phi(\theta)\Big]; \qquad \Phi(\theta) = \log \sum_{x \in \mathcal{X}^n} \exp\Big[\sum_{\alpha \in F} \theta_\alpha(x_\alpha)\Big], \quad (1)$$

where $F$ is a set of subsets of the variables, each associated with a factor $\theta_\alpha$, and $\Phi(\theta)$ is the log partition function. We associate an undirected graph $G = (V, E)$ with $p(x)$ by mapping each $x_i$ to a node $i \in V$, and adding an edge $ij \in E$ iff there exists $\alpha \in F$ such that $\{i, j\} \subseteq \alpha$. We say nodes $i$ and $j$ are neighbors if $ij \in E$. Then, $F$ is a subset of the cliques (fully connected subgraphs) of $G$. The use and evaluation of a given MRF often involves different types of inference tasks.
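To make the notation concrete, the log partition function $\Phi(\theta)$ in (1) can be evaluated by brute-force enumeration on a toy model. This is our own illustrative sketch (the function name and factor layout are ours, not the paper's):

```python
import numpy as np
from itertools import product

def log_partition(factors, n, k=2):
    """Brute-force Phi(theta) = log sum_x exp(sum_alpha theta_alpha(x_alpha))
    for a small discrete MRF with n variables of k states each.
    `factors` maps a tuple of variable indices to a log-potential table."""
    phi = -np.inf
    for x in product(range(k), repeat=n):
        score = sum(table[tuple(x[i] for i in idx)]
                    for idx, table in factors.items())
        phi = np.logaddexp(phi, score)   # numerically stable accumulation
    return phi

# A 3-node chain with pairwise factors on (x0, x1) and (x1, x2):
rng = np.random.default_rng(0)
factors = {(0, 1): rng.normal(size=(2, 2)), (1, 2): rng.normal(size=(2, 2))}
phi = log_partition(factors, n=3)
# With all-zero factors, the partition sum is k^n, so Phi = n * log(k):
assert np.isclose(log_partition({(0, 1): np.zeros((2, 2))}, n=3), 3 * np.log(2))
```

Exact enumeration is exponential in $n$, which is precisely why the decomposition bounds developed below are of interest.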
Marginalization, or sum-inference, tasks perform a sum over the configurations to calculate the log partition function $\Phi$ in (1), marginal probabilities, or the probability of some observed evidence. On the other hand, the maximum a posteriori (MAP), or max-inference, tasks perform joint maximization to find configurations with the highest probability, that is, $\Phi_0(\theta) = \max_x \sum_{\alpha \in F} \theta_\alpha(x_\alpha)$. A generalization of max- and sum-inference is marginal MAP, or mixed-inference, in which we are interested in first marginalizing a subset $A$ of variables (e.g., hidden variables), and then maximizing the remaining variables $B$ (whose values are of direct interest), that is,

$$\Phi_{AB}(\theta) = \max_{x_B} Q(x_B) = \max_{x_B} \log \sum_{x_A} \exp\Big[\sum_{\alpha \in F} \theta_\alpha(x_\alpha)\Big], \quad (2)$$

where $A \cup B = V$ (all the variables) and $A \cap B = \emptyset$. Obviously, both sum- and max-inference are special cases of marginal MAP when $A = V$ and $B = V$, respectively. It will be useful to define an even more general inference task, based on a power sum operator:

$$\sum_{x_i}^{\tau_i} f(x_i) = \Big[\sum_{x_i} f(x_i)^{1/\tau_i}\Big]^{\tau_i},$$

where $f(x_i)$ is any non-negative function and $\tau_i$ is a temperature or weight parameter. The power sum reduces to a standard sum when $\tau_i = 1$, and approaches $\max_x f(x)$ when $\tau_i \to 0^+$, so that we define the power sum with $\tau_i = 0$ to equal the max operator. The power sum is helpful for unifying max- and sum-inference [e.g., 35], as well as marginal MAP [15]. Specifically, we can apply power sums with different weights $\tau_i$ to each variable $x_i$ along a predefined elimination order (e.g., $[x_1, \ldots, x_n]$), to define the weighted log partition function:

$$\Phi_\tau(\theta) = \log \sum_{x}^{\tau} \exp(\theta(x)) = \log \sum_{x_n}^{\tau_n} \cdots \sum_{x_1}^{\tau_1} \exp(\theta(x)), \quad (3)$$

where we note that the value of (3) depends on the elimination order unless all the weights are equal. Obviously, (3) includes marginal MAP (2) as a special case by setting weights $\tau_A = 1$ and $\tau_B = 0$.
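As a quick illustration (our own sketch; the function name is an assumption, not from the paper), the power sum can be implemented directly and checked against its two limiting cases:

```python
import numpy as np

def power_sum(f, tau):
    """Power sum of a non-negative vector f with weight tau:
    (sum_x f(x)^(1/tau))^tau; tau = 0 is defined as the max."""
    f = np.asarray(f, dtype=float)
    if tau == 0.0:
        return f.max()
    return (f ** (1.0 / tau)).sum() ** tau

f = np.array([0.5, 1.5, 2.0])
assert np.isclose(power_sum(f, 1.0), f.sum())    # tau = 1: ordinary sum
assert np.isclose(power_sum(f, 0.0), f.max())    # tau = 0: max operator
assert abs(power_sum(f, 0.1) - f.max()) < 0.05   # tau -> 0+ approaches the max
```

The explicit `tau == 0.0` branch encodes the paper's convention that the zero-weight power sum is the max operator; in practice a log-domain implementation is preferable for small `tau` to avoid overflow.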
This representation provides a useful tool for understanding and deriving new algorithms for general inference tasks, especially marginal MAP, for which relatively few efficient algorithms exist. 2 3 Related Work Variational upper bounds on MAP and the partition function, along with algorithms for providing fast, convergent optimization, have been widely studied in the last decade. In MAP, dual decomposition and linear programming methods have become a dominating approach, with numerous optimization techniques [36, 6, 32, 14, 37, 30, 22], and methods to tighten the approximations [33, 14]. For summation problems, most upper bounds are derived from the tree-reweighted (TRW) family of convex bounds [34], or more generally conditional entropy decompositions [5]. TRW bounds can be framed as optimizing over a convex combination of tree-structured models, or in a dual representation as a message-passing, TRW belief propagation algorithm. This illustrates a basic tension in the resulting bounds: in its primal form 2 (combination of trees), TRW is inefficient: it maintains a weight and O(|V |) parameters for each tree, and a large number of trees may be required to obtain a tight bound; this uses memory and makes optimization slow. On the other hand, the dual, or free energy, form uses only O(|E|) parameters (the TRW messages) to optimize over the set of all possible spanning trees – but, the resulting optimization is only guaranteed to be a bound at convergence, 3 making it difficult to use in an anytime fashion. Similarly, the gradient of the weights is only correct at convergence, making it difficult to optimize over these parameters; most implementations [e.g., 24] simply adopt fixed weights. Thus, most algorithms do not satisfy all the desirable properties listed in the introduction. For example, many works have developed convergent message-passing algorithms for convex free energies [e.g., 9, 10]. 
However, by optimizing the dual they do not provide a bound until convergence, and the representation and constraints on the counting numbers do not facilitate optimizing the bound over these parameters. To optimize counting numbers, [8] adopt a more restrictive free energy form requiring positive counting numbers on the entropies; but this cannot represent marginal MAP, whose free energy involves conditional entropies (equivalent to the difference between two entropy terms). On the other hand, working in the primal domain ensures a bound, but usually at the cost of enumerating a large number of trees. [12] heuristically select a small number of trees to avoid being too inefficient, while [21] focus on trying to speed up the updates on a given collection of trees. Another primal bound is weighted mini-bucket (WMB, [16]), which can represent a large collection of trees compactly and is easily applied to marginal MAP using the weighted log partition function viewpoint [15, 18]; however, existing optimization algorithms for WMB are non-monotonic, and often fail to converge, especially on marginal MAP tasks. While our focus is on variational bounds [16, 17], there are many non-variational approaches for marginal MAP as well. [27, 38] provide upper bounds on marginal MAP by reordering the order in which variables are eliminated, and using exact inference in the reordered join-tree; however, this is exponential in the size of the (unconstrained) treewidth, and can easily become intractable. [20] give an approximation closely related to mini-bucket [2] to bound the marginal MAP; however, unlike (weighted) mini-bucket, these bounds cannot be improved iteratively. The same is true for the algorithm of [19], which also has a strong dependence on treewidth. Other examples of marginal MAP algorithms include local search [e.g., 28] and Markov chain Monte Carlo methods [e.g., 4, 39]. 
4 Fully Decomposed Upper Bound In this section, we develop a new general form of upper bound and provide an efficient, monotonically convergent optimization algorithm. Our new bound is based on fully decomposing the graph into disconnected cliques, allowing very efficient local computation, but can still be as tight as WMB or the TRW bound with a large collection of spanning trees once the weights and shifting variables are chosen or optimized properly. Our bound reduces to dual decomposition for MAP inference, but is applicable to more general mixed-inference settings. Our main result is based on the following generalization of the classical Hölder's inequality [7]. (Footnote 2: Despite the term “dual decomposition” used in MAP tasks, in this work we refer to decomposition bounds as “primal” bounds, since they can be viewed as directly bounding the result of variable elimination. This is in contrast to, for example, the linear programming relaxation of MAP, which bounds the result only after optimization. Footnote 3: See an example on an Ising model in Supplement A.)

Theorem 4.1. For a given graphical model $p(x; \theta)$ in (1) with cliques $F = \{\alpha\}$ and a set of non-negative weights $\tau = \{\tau_i \ge 0, i \in V\}$, we define a set of “split weights” $w^\alpha = \{w^\alpha_i \ge 0, i \in \alpha\}$ on each variable–clique pair $(i, \alpha)$ that satisfies $\sum_{\alpha \mid \alpha \ni i} w^\alpha_i = \tau_i$. Then we have

$$\sum_{x}^{\tau} \prod_{\alpha \in F} \exp\big(\theta_\alpha(x_\alpha)\big) \;\le\; \prod_{\alpha \in F} \sum_{x_\alpha}^{w^\alpha} \exp\big(\theta_\alpha(x_\alpha)\big), \quad (4)$$

where the left-hand side is the powered-sum along order $[x_1, \ldots, x_n]$ as defined in (3), and the right-hand side is the product of the powered-sums on subvectors $x_\alpha$ with weights $w^\alpha$ along the same elimination order; that is, $\sum_{x_\alpha}^{w^\alpha} \exp\theta_\alpha(x_\alpha) = \sum_{x_{k_c}}^{w^\alpha_{k_c}} \cdots \sum_{x_{k_1}}^{w^\alpha_{k_1}} \exp\theta_\alpha(x_\alpha)$, where $x_\alpha = [x_{k_1}, \ldots, x_{k_c}]$ is ranked with increasing index, consistent with the elimination order $[x_1, \ldots, x_n]$ used on the left-hand side. Proof details can be found in Section E of the supplement.
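The inequality (4) can be sanity-checked numerically on a tiny two-clique chain. This is our own sketch: all $\tau_i = 1$, so the left-hand side is the exact partition sum, and the shared variable's weight is split as $0.5 + 0.5$ between the two cliques:

```python
import numpy as np

def psum(f, tau, axis):
    """Power sum of non-negative values f along `axis` with weight tau."""
    if tau == 0.0:
        return f.max(axis=axis)
    return (f ** (1.0 / tau)).sum(axis=axis) ** tau

rng = np.random.default_rng(0)
th_a = rng.normal(size=(2, 2))   # theta_alpha(x1, x2)
th_b = rng.normal(size=(2, 2))   # theta_beta(x2, x3)

# Left-hand side of (4): the exact partition sum (all tau_i = 1).
lhs = sum(np.exp(th_a[x1, x2] + th_b[x2, x3])
          for x1 in range(2) for x2 in range(2) for x3 in range(2))

# Right-hand side: split x2's weight as 0.5 + 0.5 across the two cliques.
# Clique alpha: sum over x1 (weight 1), then power-sum over x2 (weight 0.5).
term_a = psum(psum(np.exp(th_a), 1.0, axis=0), 0.5, axis=0)
# Clique beta: power-sum over x2 (weight 0.5), then sum over x3 (weight 1).
term_b = psum(psum(np.exp(th_b), 0.5, axis=0), 1.0, axis=0)

assert lhs <= term_a * term_b + 1e-9   # the Holder-type bound (4) holds
```

With these weights the inequality reduces to Cauchy–Schwarz over the shared variable, which is why the check is guaranteed to pass for any random potentials.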
A key advantage of the bound (4) is that it decomposes the joint power sum on $x$ into a product of independent power sums over smaller cliques $x_\alpha$, which significantly reduces computational complexity and enables parallel computation. 4.1 Including Cost-shifting Variables In order to increase the flexibility of the upper bound, we introduce a set of cost-shifting or reparameterization variables $\delta = \{\delta^\alpha_i(x_i) \mid \forall (i, \alpha), i \in \alpha\}$ on each variable–factor pair $(i, \alpha)$, which can be optimized to provide a much tighter upper bound. Note that $\Phi_\tau(\theta)$ can be rewritten as

$$\Phi_\tau(\theta) = \log \sum_{x}^{\tau} \exp\Big[\sum_{i \in V} \sum_{\alpha \in N_i} \delta^\alpha_i(x_i) + \sum_{\alpha \in F} \Big(\theta_\alpha(x_\alpha) - \sum_{i \in \alpha} \delta^\alpha_i(x_i)\Big)\Big],$$

where $N_i = \{\alpha \mid \alpha \ni i\}$ is the set of cliques incident to $i$. Applying inequality (4), we have that

$$\Phi_\tau(\theta) \le \sum_{i \in V} \log \sum_{x_i}^{w_i} \exp\Big[\sum_{\alpha \in N_i} \delta^\alpha_i(x_i)\Big] + \sum_{\alpha \in F} \log \sum_{x_\alpha}^{w^\alpha} \exp\Big[\theta_\alpha(x_\alpha) - \sum_{i \in \alpha} \delta^\alpha_i(x_i)\Big] \overset{\text{def}}{=} L(\delta, w), \quad (5)$$

where the nodes $i \in V$ are also treated as cliques within inequality (4), and a new weight $w_i$ is introduced on each variable $i$; the new weights $w = \{w_i, w^\alpha_i \mid \forall (i, \alpha), i \in \alpha\}$ should satisfy

$$w_i + \sum_{\alpha \in N_i} w^\alpha_i = \tau_i, \qquad w_i \ge 0, \quad w^\alpha_i \ge 0, \quad \forall (i, \alpha). \quad (6)$$

The bound $L(\delta, w)$ is convex w.r.t. the cost-shifting variables $\delta$ and weights $w$, enabling an efficient optimization algorithm that we present in Section 5. As we will discuss in Section 5.1, these shifting variables correspond to Lagrange multipliers that enforce a moment matching condition. 4.2 Dual Form and Connection With Existing Bounds It is straightforward to see that our bound in (5) reduces to dual decomposition [31] when applied to MAP inference with all $\tau_i = 0$, and hence $w_i = w^\alpha_i = 0$. On the other hand, its connection with sum-inference bounds such as WMB and TRW is seen more clearly via a dual representation of (5): Theorem 4.2.
The tightest upper bound obtainable by (5) is

$$\min_{w} \min_{\delta} L(\delta, w) = \min_{w} \max_{b \in \mathbb{L}(G)} \Big\{\langle \theta, b \rangle + \sum_{i \in V} w_i H(x_i; b_i) + \sum_{\alpha \in F} \sum_{i \in \alpha} w^\alpha_i H(x_i \mid x_{\mathrm{pa}^\alpha_i}; b_\alpha)\Big\}, \quad (7)$$

where $b = \{b_i(x_i), b_\alpha(x_\alpha) \mid \forall (i, \alpha), i \in \alpha\}$ is a set of pseudo-marginals (or beliefs) defined on the singleton variables and the cliques, and $\mathbb{L}$ is the corresponding local consistency polytope defined by $\mathbb{L}(G) = \{b \mid b_i(x_i) = \sum_{x_{\alpha \setminus i}} b_\alpha(x_\alpha), \; \sum_{x_i} b_i(x_i) = 1\}$. Here, the $H(\cdot)$ are the corresponding marginal or conditional entropies, and $\mathrm{pa}^\alpha_i$ is the set of variables in $\alpha$ that rank later than $i$; that is, for the global elimination order $[x_1, \ldots, x_n]$, $\mathrm{pa}^\alpha_i = \{j \in \alpha \mid j \succ i\}$. The proof details can be found in Section F of the supplement.

Figure 1: Illustrating WMB, TRW and our bound on (a) a 3 × 3 grid. (b) WMB uses a covering tree with a minimal number of splits and cost-shifting. (c) Our decomposition (5) further splits the graph into small cliques (here, edges), introducing additional cost-shifting variables but allowing for easier, monotonic optimization. (d) Primal TRW splits the graph into many spanning trees, requiring even more cost-shifting variables. Note that all three bounds attain the same tightness after optimization.

It is useful to compare Theorem 4.2 with other dual representations. As the sum of non-negatively weighted conditional entropies, the bound is clearly convex and within the general class of conditional entropy decompositions (CED) [5], but unlike generic CED it has a simple and efficient primal form (5). (Footnote 4: The primal form derived in [5], a geometric program, is computationally infeasible.) Comparing to the dual form of WMB in Theorem 4.2 of [16], our bound is as tight as WMB, and hence as tight as the class of TRW / CED bounds attainable by WMB [16].
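To see that (5) really is a primal bound for any $\delta$, here is our own numerical sketch on a three-node chain with two pairwise cliques; the weight assignment, which satisfies (6) with all $\tau_i = 1$, is our own choice:

```python
import numpy as np

def log_psum(logf, tau, axis):
    """log of the power sum with weight tau along `axis` (log-domain, stable)."""
    if tau == 0.0:
        return logf.max(axis=axis)
    z = logf / tau
    m = z.max(axis=axis, keepdims=True)
    return tau * (np.squeeze(m, axis) + np.log(np.exp(z - m).sum(axis=axis)))

rng = np.random.default_rng(1)
th_a = rng.normal(size=(2, 2))   # theta_a(x1, x2)
th_b = rng.normal(size=(2, 2))   # theta_b(x2, x3)

# Exact weighted log partition function, all tau_i = 1 (plain summation).
phi = np.log(sum(np.exp(th_a[x1, x2] + th_b[x2, x3])
                 for x1 in range(2) for x2 in range(2) for x3 in range(2)))

# Weights satisfying w_i + sum_alpha w_i^alpha = tau_i = 1 for every node.
w = {'n1': 0.5, 'n2': 0.5, 'n3': 0.5,
     'a1': 0.5, 'a2': 0.25, 'b2': 0.25, 'b3': 0.5}

def bound(d_a1, d_a2, d_b2, d_b3):
    """Decomposed bound L(delta, w) of Eq. (5) for this 3-node chain."""
    single = (log_psum(d_a1, w['n1'], 0)
              + log_psum(d_a2 + d_b2, w['n2'], 0)
              + log_psum(d_b3, w['n3'], 0))
    # Clique a: eliminate x1 (weight w_1^a), then x2 (weight w_2^a).
    la = log_psum(log_psum(th_a - d_a1[:, None] - d_a2[None, :],
                           w['a1'], 0), w['a2'], 0)
    # Clique b: eliminate x2 (weight w_2^b), then x3 (weight w_3^b).
    lb = log_psum(log_psum(th_b - d_b2[:, None] - d_b3[None, :],
                           w['b2'], 0), w['b3'], 0)
    return single + la + lb

z = np.zeros(2)
assert bound(z, z, z, z) >= phi - 1e-9   # a valid bound at delta = 0 ...
d = rng.normal(size=(4, 2))
assert bound(*d) >= phi - 1e-9           # ... and for arbitrary delta
```

Minimizing `bound` over the four $\delta$ vectors (and the weights) tightens the gap to $\Phi_\tau$, which is exactly what the coordinate descent algorithm of Section 5 does.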
Most duality-based forms [e.g., 9, 10] are expressed in terms of joint entropies, $\langle \theta, b \rangle + \sum_\beta c_\beta H(b_\beta)$, rather than conditional entropies; while the two can be converted, the resulting counting numbers $c_\beta$ will be differences of the weights $\{w^\alpha_i\}$ [footnote 5], which obfuscates its convexity, makes it harder to maintain the relative constraints on the counting numbers during optimization, and makes some counting numbers negative (rendering some methods inapplicable [8]). Finally, like most variational bounds in dual form, the RHS of (7) has an inner maximization and hence is guaranteed to bound $\Phi_\tau(\theta)$ only at its optimum. In contrast, our Eq. (5) is a primal bound (hence, a bound for any $\delta$). It is similar to the primal form of TRW, except that (1) the individual regions are single cliques, rather than spanning trees of the graph [footnote 6], and (2) the fractional weights $w^\alpha$ associated with each region are vectors, rather than a single scalar. The representation's efficiency can be seen with an example in Figure 1, which shows a 3 × 3 grid model and three relaxations that achieve the same bound. Assuming $d$ states per variable and ignoring the equality constraints, our decomposition in Figure 1(c) uses $24d$ cost-shifting parameters ($\delta$) and 24 weights. WMB (Figure 1(b)) is slightly more efficient, with only $8d$ parameters for $\delta$ and 8 weights, but its lack of decomposition makes parallel and monotonic updates difficult. On the other hand, the equivalent primal TRW uses 16 spanning trees, shown in Figure 1(d), for $16 \cdot 8 \cdot d^2$ parameters and 16 weights. The increased dimensionality of the optimization slows convergence, and updates are non-local, requiring full message-passing sweeps on the involved trees (although this cost can be amortized in some cases [21]). 5 Monotonically Tightening the Bound In this section, we propose a block coordinate descent algorithm (Algorithm 1) to minimize the upper bound $L(\delta, w)$ in (5) w.r.t. the shifting variables $\delta$ and weights $w$.
Our algorithm has a monotonic convergence property, and allows efficient, distributable local computation due to the full decomposition of our bound. Our framework allows generic powered-sum inference, including max-, sum-, or mixed-inference as special cases by setting different weights. 5.1 Moment Matching and Entropy Matching We start with deriving the gradient of $L(\delta, w)$ w.r.t. $\delta$ and $w$. We show that the zero-gradient equation w.r.t. $\delta$ has a simple form of moment matching that enforces a consistency between the singleton beliefs and their related clique beliefs, and that of the weights $w$ enforces a consistency of marginal and conditional entropies.

Theorem 5.1. (1) For $L(\delta, w)$ in (5), its zero-gradient w.r.t. $\delta^\alpha_i(x_i)$ is

$$\frac{\partial L}{\partial \delta^\alpha_i(x_i)} = \mu_i(x_i) - \sum_{x_{\alpha \setminus i}} \mu_\alpha(x_\alpha) = 0, \quad (8)$$

where $\mu_i(x_i) \propto \exp\big(\frac{1}{w_i} \sum_{\alpha \in N_i} \delta^\alpha_i(x_i)\big)$ can be interpreted as a singleton belief on $x_i$, and $\mu_\alpha(x_\alpha)$ can be viewed as a clique belief on $x_\alpha$, defined with a chain rule (assuming $x_\alpha = [x_1, \ldots, x_c]$): $\mu_\alpha(x_\alpha) = \prod_{i=1}^{c} \mu_\alpha(x_i \mid x_{i+1:c})$, with $\mu_\alpha(x_i \mid x_{i+1:c}) = \big(Z_{i-1}(x_{i:c}) / Z_i(x_{i+1:c})\big)^{1/w^\alpha_i}$, where $Z_i$ is the partial powered-sum up to $x_{1:i}$ on the clique, that is,

$$Z_i(x_{i+1:c}) = \sum_{x_i}^{w^\alpha_i} \cdots \sum_{x_1}^{w^\alpha_1} \exp\Big[\theta_\alpha(x_\alpha) - \sum_{i \in \alpha} \delta^\alpha_i(x_i)\Big], \qquad Z_0(x_\alpha) = \exp\Big[\theta_\alpha(x_\alpha) - \sum_{i \in \alpha} \delta^\alpha_i(x_i)\Big],$$

where the summation order should be consistent with the global elimination order $o = [x_1, \ldots, x_n]$.

(2) The gradients of $L(\delta, w)$ w.r.t. the weights $\{w_i, w^\alpha_i\}$ are marginal and conditional entropies defined on the beliefs $\{\mu_i, \mu_\alpha\}$, respectively:

$$\frac{\partial L}{\partial w_i} = H(x_i; \mu_i), \qquad \frac{\partial L}{\partial w^\alpha_i} = H(x_i \mid x_{i+1:c}; \mu_\alpha) = -\sum_{x_\alpha} \mu_\alpha(x_\alpha) \log \mu_\alpha(x_i \mid x_{i+1:c}). \quad (9)$$

Therefore, the optimal weights should satisfy the following KKT condition:

$$w_i \big(H(x_i; \mu_i) - \bar{H}_i\big) = 0, \qquad w^\alpha_i \big(H(x_i \mid x_{i+1:c}; \mu_\alpha) - \bar{H}_i\big) = 0, \quad \forall (i, \alpha), \quad (10)$$

where $\bar{H}_i = w_i H(x_i; \mu_i) + \sum_\alpha w^\alpha_i H(x_i \mid x_{i+1:c}; \mu_\alpha)$ is the (weighted) average entropy on node $i$. The proof details can be found in Section G of the supplement.

Footnote 5: See more details of this connection in Section F.3 of the supplement. Footnote 6: While non-spanning subgraphs can be used in the primal TRW form, doing so leads to loose bounds; in contrast, our decomposition's terms consist of individual cliques.

Algorithm 1 Generalized Dual-decomposition (GDD)
Input: weights $\{\tau_i \mid i \in V\}$, elimination order $o$.
Output: the optimal $\delta^*, w^*$ giving the tightest upper bound $L(\delta^*, w^*)$ for $\Phi_\tau(\theta)$ in (5).
initialize $\delta = 0$ and weights $w = \{w_i, w^\alpha_i\}$.
repeat
  for node $i$ (in parallel with nodes $j$, $(i, j) \notin E$) do
    if $\tau_i = 0$ then update $\delta_{N_i} = \{\delta^\alpha_i \mid \forall \alpha \in N_i\}$ with the closed-form update (11);
    else if $\tau_i \neq 0$ then update $\delta_{N_i}$ and $w_{N_i}$ with gradient descent (8) and (12), combined with line search;
    end if
  end for
until convergence
$\delta^* \leftarrow \delta$, $w^* \leftarrow w$, and evaluate $L(\delta^*, w^*)$ by (5).
Remark. GDD solves max-, sum- and mixed-inference by setting different values of the weights $\{\tau_i\}$.

The matching condition (8) enforces that $\mu = \{\mu_i, \mu_\alpha \mid \forall (i, \alpha)\}$ belongs to the local consistency polytope $\mathbb{L}$ as defined in Theorem 4.2; similar moment matching results appear commonly in variational inference algorithms [e.g., 34]. [34] also derive a gradient of the weights, but it is based on the free energy form and is correct only after optimization; our form holds at any point, enabling efficient joint optimization of $\delta$ and $w$. 5.2 Block Coordinate Descent We derive a block coordinate descent method in Algorithm 1 to minimize our bound, in which we sweep through all the nodes $i$ and update each block $\delta_{N_i} = \{\delta^\alpha_i(x_i) \mid \forall \alpha \in N_i\}$ and $w_{N_i} = \{w_i, w^\alpha_i \mid \forall \alpha \in N_i\}$ with the neighborhood parameters fixed. Our algorithm applies two update types, depending on whether the variables have zero weight: (1) For nodes with $\tau_i = 0$ (corresponding to max nodes $i \in B$ in marginal MAP), we derive a closed-form coordinate descent rule for the associated shifting variables $\delta_{N_i}$; these nodes do not require optimizing $w_{N_i}$, since it is fixed to zero.
(2) For nodes with $\tau_i \neq 0$ (e.g., sum nodes $i \in A$ in marginal MAP), we lack a closed-form update for $\delta_{N_i}$ and $w_{N_i}$, and optimize by local gradient descent combined with line search. The lack of a closed-form coordinate update for nodes with $\tau_i \neq 0$ is mainly because the order of power sums with different weights cannot be exchanged. However, the gradient descent inner loop is still efficient, because each gradient evaluation only involves the local variables in clique $\alpha$.

Closed-form Update. For any node $i$ with $\tau_i = 0$ (i.e., max nodes $i \in B$ in marginal MAP), and its associated $\delta_{N_i} = \{\delta^\alpha_i(x_i) \mid \forall \alpha \in N_i\}$, the following update gives a closed-form solution for the zero (sub-)gradient equation in (8) (keeping the other $\{\delta^\alpha_j \mid j \neq i, \forall \alpha \in N_i\}$ fixed):

$$\delta^\alpha_i(x_i) \leftarrow \frac{|N_i|}{|N_i| + 1}\, \gamma^\alpha_i(x_i) - \frac{1}{|N_i| + 1} \sum_{\beta \in N_i \setminus \alpha} \gamma^\beta_i(x_i), \quad (11)$$

where $|N_i|$ is the number of neighborhood cliques, and $\gamma^\alpha_i(x_i) = \log \sum_{x_{\alpha \setminus i}}^{w^\alpha_{\setminus i}} \exp\big(\theta_\alpha(x_\alpha) - \sum_{j \in \alpha \setminus i} \delta^\alpha_j(x_j)\big)$. Note that the update in (11) works regardless of the weights of the nodes $\{\tau_j \mid \forall j \in \alpha, \forall \alpha \in N_i\}$ in the neighborhood cliques; when all the neighboring nodes also have zero weight ($\tau_j = 0$ for $\forall j \in \alpha, \forall \alpha \in N_i$), it is analogous to the “star” update of dual decomposition for MAP [31]. The detailed derivation is shown in Proposition H.2 in the supplement. The update in (11) can be calculated with a cost of only $O(|N_i| \cdot d^{|\alpha|})$, where $d$ is the number of states of $x_i$ and $|\alpha|$ is the clique size, by computing and saving all the shared $\{\gamma^\alpha_i(x_i)\}$ before updating $\delta_{N_i}$. Furthermore, the updates of $\delta_{N_i}$ for different nodes $i$ are independent if they are not directly connected by some clique $\alpha$; this makes it easy to parallelize the coordinate descent process by partitioning the graph into independent sets, and parallelizing the updates within each set.

Local Gradient Descent. For nodes with $\tau_i \neq 0$ (or $i \in A$ in marginal MAP), there is no closed-form solution for $\{\delta^\alpha_i(x_i)\}$ and $\{w_i, w^\alpha_i\}$ to minimize the upper bound.
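Before turning to the gradient-based case, the closed-form update (11) can be sketched as follows (our own implementation and naming, assuming the $\gamma^\alpha_i$ values have already been computed). A useful consequence, which the sketch checks, is that after the update $\gamma^\alpha_i - \delta^\alpha_i$ is identical across all incident cliques, i.e., the reparameterized local scores agree:

```python
import numpy as np

def star_update(gammas):
    """Closed-form coordinate update (11) for a zero-weight (max) node i.
    gammas: dict mapping clique alpha -> gamma_i^alpha(x_i) (1-D array).
    Returns dict alpha -> updated delta_i^alpha(x_i)."""
    n = len(gammas)                      # |N_i|, number of incident cliques
    total = sum(gammas.values())
    # delta^alpha = (n * gamma^alpha - sum_{beta != alpha} gamma^beta) / (n + 1)
    return {a: (n * g - (total - g)) / (n + 1.0) for a, g in gammas.items()}

g = {'a': np.array([0.2, 1.0]),
     'b': np.array([0.5, -0.3]),
     'c': np.array([1.1, 0.4])}
d = star_update(g)
# gamma - delta is the same vector for every clique after the update:
res = [g[a] - d[a] for a in g]
assert np.allclose(res[0], res[1]) and np.allclose(res[1], res[2])
```

Algebraically, $\gamma^\alpha_i - \delta^\alpha_i = \frac{1}{|N_i|+1}\sum_\beta \gamma^\beta_i$ for every $\alpha$, which is the agreement property the assertion verifies.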
However, because of the fully decomposed form, the gradients w.r.t. $\delta_{N_i}$ and $w_{N_i}$, (8)–(9), can be evaluated efficiently via local computation with $O(|N_i| \cdot d^{|\alpha|})$ cost, and again can be parallelized between nonadjacent nodes. To handle the normalization constraint (6) on $w_{N_i}$, we use an exponential gradient descent: let $w_i = \exp(v_i) / \big(\exp(v_i) + \sum_\alpha \exp(v^\alpha_i)\big)$ and $w^\alpha_i = \exp(v^\alpha_i) / \big(\exp(v_i) + \sum_\alpha \exp(v^\alpha_i)\big)$; taking the gradient w.r.t. $v_i$ and $v^\alpha_i$ and transforming back gives the following update:

$$w_i \propto w_i \exp\Big[-\eta\, w_i \big(H(x_i; \mu_i) - \bar{H}_i\big)\Big], \qquad w^\alpha_i \propto w^\alpha_i \exp\Big[-\eta\, w^\alpha_i \big(H(x_i \mid x_{\mathrm{pa}^\alpha_i}; \mu_\alpha) - \bar{H}_i\big)\Big], \quad (12)$$

where $\eta$ is the step size and $\mathrm{pa}^\alpha_i = \{j \in \alpha \mid j \succ i\}$. In our implementation, we find that a few gradient steps (e.g., 5) with a backtracking line search using the Armijo rule works well in practice. Other more advanced optimization methods, such as L-BFGS and Newton's method, are also applicable.

6 Experiments In this section, we demonstrate our algorithm on a set of real-world graphical models from recent UAI inference challenges, including two diagnostic Bayesian networks with 203 and 359 variables and max domain sizes 7 and 6, respectively, and several MRFs for pedigree analysis with up to 1289 variables, max domain size of 7, and clique size 5 [footnote 7]. We construct marginal MAP problems on these models by randomly selecting half of the variables to be max nodes, and the rest as sum nodes. We implement several algorithms that optimize the same primal marginal MAP bound, including our GDD (Algorithm 1), the WMB algorithm in [16] with ibound = 1, which uses the same cliques and a fixed-point heuristic for optimization, and an off-the-shelf L-BFGS implementation that directly optimizes our decomposed bound. For comparison, we also computed several related primal bounds, including standard mini-bucket [2] and elimination reordering [27, 38], limited to the same computational limits (ibound = 1).
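The exponentiated weight update (12) can be sketched as follows (our own implementation; we take $\bar{H}_i = w_i H_i + \sum_\alpha w^\alpha_i H^\alpha_i$ with $\tau_i = 1$, and the step size is an arbitrary choice):

```python
import numpy as np

def weight_step(w, H, tau_i=1.0, eta=0.5):
    """One exponentiated-gradient step (12) on the weights of node i.
    w: current weights [w_i, w_i^{a}, ...], summing to tau_i.
    H: matching entropies [H(x_i; mu_i), H(x_i | pa; mu_a), ...]."""
    h_bar = np.dot(w, H)                        # weighted average entropy H_bar_i
    w_new = w * np.exp(-eta * w * (H - h_bar))  # multiplicative update (12)
    return tau_i * w_new / w_new.sum()          # renormalize onto the simplex

w = np.array([1 / 3, 1 / 3, 1 / 3])
H = np.array([0.2, 0.9, 0.5])
w2 = weight_step(w, H)
assert np.isclose(w2.sum(), 1.0) and (w2 >= 0).all()
assert w2[1] < w[1] and w2[0] > w[0]   # weight with above-average entropy shrinks
```

The multiplicative form keeps the weights strictly positive, and the final renormalization maintains the simplex constraint (6) exactly, mirroring the softmax reparameterization in the text.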
We also tried MAS [20] but found its bounds extremely loose.8 Decoding (finding a configuration ˆxB) is more difficult in marginal MAP than in joint MAP. We use the same local decoding procedure that is standard in dual decomposition [31]. However, evaluating the objective Q(ˆxB) involves a potentially difficult sum over xA, making it hard to score each decoding. For this reason, we evaluate the score of each decoding, but show the most recent decoding rather than the best (as is standard in MAP) to simulate behavior in practice. Figure 2 and Figure 3 compare the convergence of the different algorithms, where we define the iteration of each algorithm to correspond to a full sweep over the graph, with the same order of time complexity: one iteration for GDD is defined in Algorithm 1; for WMB is a full forward and backward message pass, as in Algorithm 2 of [16]; and for L-BFGS is a joint quasi-Newton step on all variables. The elimination order that we use is obtained by a weighted-min-fill heuristic [1] constrained to eliminate the sum nodes first. Diagnostic Bayesian Networks. Figure 2(a)-(b) shows that our GDD converges quickly and monotonically on both the networks, while WMB does not converge without proper damping; we 7See http://graphmod.ics.uci.edu/uai08/Evaluation/Report/Benchmarks. 8The instances tested have many zero probabilities, which make finding lower bounds difficult; since MAS’ bounds are symmetrized, this likely contributes to its upper bounds being loose. 
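The local decoding step mentioned above, applied to one max node, can be sketched as follows (our own illustration; for a max node $w_i \to 0$, so we simply take the argmax of the aggregated cost-shifting values, i.e., the mode of the singleton belief):

```python
import numpy as np

def local_decode(deltas_i):
    """Local decoding for one max node i: choose the state maximizing
    sum_alpha delta_i^alpha(x_i) over the cliques incident to i."""
    return int(np.argmax(sum(deltas_i.values())))

deltas_i = {'a': np.array([0.1, 0.7]), 'b': np.array([0.4, -0.6])}
assert local_decode(deltas_i) == 0   # aggregated scores [0.5, 0.1] -> state 0
```

Because each node is decoded independently, this is exactly as cheap as in joint-MAP dual decomposition; the expensive part, as the text notes, is scoring the decoding, which requires the sum over $x_A$.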
[Figure 2: Marginal MAP results on (a) BN-1 (203 nodes) and (b) BN-2 (359 nodes) with 50% randomly selected max-nodes (additional plots are in supplement B). The plots show the upper bounds of the different algorithms (WMB at several damping rates, GDD, L-BFGS, MBE, and elimination reordering) across iterations; the objective function Q(xB) of (2) for the decoded solutions xB is also shown (dashed lines). At the beginning, Q(xB) may equal −∞ because of zero probability.]

[Figure 3: Marginal MAP inference on three pedigree models: (a) pedigree1 (334 nodes), (b) pedigree7 (1068 nodes), (c) pedigree9 (1118 nodes) (additional plots are in supplement C). We randomly select half the nodes as max-nodes in these models. We tune the damping rate of WMB from 0.01 to 0.05.]

experimented with different damping ratios for WMB, and found that it is slower than GDD even with the best damping ratio found (e.g., in Figure 2(a), WMB works best with damping ratio 0.035 (WMB-0.035), but is still significantly slower than GDD). Our GDD also gives better decoded marginal MAP solutions xB (obtained by rounding the singleton beliefs). Both WMB and our GDD provide a much tighter bound than the non-iterative mini-bucket elimination (MBE) [2] or reordered elimination [27, 38] methods. Genetic Pedigree Instances.
Figure 3 shows similar results on a set of pedigree instances. Again, GDD outperforms WMB even with the best possible damping, and outperforms the non-iterative bounds after only one iteration (pass through the graph).

7 Conclusion

In this work, we propose a new class of decomposition bounds for general powered-sum inference, which is capable of representing a large class of primal variational bounds but is much more computationally efficient. Unlike previous primal sum bounds, our bound decomposes into computations on small, local cliques, increasing efficiency and enabling parallel and monotonic optimization. We derive a block coordinate descent algorithm for optimizing our bound over both the cost-shifting parameters (reparameterization) and the weights (fractional counting numbers), which generalizes dual decomposition and enjoys a similar monotonic convergence property. Taking advantage of its monotonic convergence, our new algorithm can be widely applied as a building block for improved heuristic construction in search, or for more efficient learning algorithms.

Acknowledgments
This work is sponsored in part by NSF grants IIS-1065618 and IIS-1254071. Alexander Ihler is also funded in part by the United States Air Force under Contract No. FA8750-14-C-0011 under the DARPA PPAML program.

References
[1] R. Dechter. Reasoning with probabilistic and deterministic graphical models: Exact algorithms. Synthesis Lectures on Artificial Intelligence and Machine Learning, 2013.
[2] R. Dechter and I. Rish. Mini-buckets: A general scheme for bounded inference. JACM, 2003.
[3] J. Domke. Dual decomposition for marginal inference. In AAAI, 2011.
[4] A. Doucet, S. Godsill, and C. Robert. Marginal maximum a posteriori estimation using Markov chain Monte Carlo. Statistics and Computing, 2002.
[5] A. Globerson and T. Jaakkola. Approximate inference using conditional entropy decompositions. In AISTATS, 2007.
[6] A. Globerson and T. Jaakkola.
Fixing max-product: Convergent message passing algorithms for MAP LP-relaxations. In NIPS, 2008.
[7] G. H. Hardy, J. E. Littlewood, and G. Polya. Inequalities. Cambridge University Press, 1952.
[8] T. Hazan, J. Peng, and A. Shashua. Tightening fractional covering upper bounds on the partition function for high-order region graphs. In UAI, 2012.
[9] T. Hazan and A. Shashua. Convergent message-passing algorithms for inference over general graphs with convex free energies. In UAI, 2008.
[10] T. Hazan and A. Shashua. Norm-product belief propagation: Primal-dual message-passing for approximate inference. IEEE Transactions on Information Theory, 2010.
[11] A. Ihler, N. Flerova, R. Dechter, and L. Otten. Join-graph based cost-shifting schemes. In UAI, 2012.
[12] J. Jancsary and G. Matz. Convergent decomposition solvers for TRW free energies. In AISTATS, 2011.
[13] I. Kiselev and P. Poupart. Policy optimization by marginal MAP probabilistic inference in generative models. In AAMAS, 2014.
[14] N. Komodakis, N. Paragios, and G. Tziritas. MRF energy minimization and beyond via dual decomposition. TPAMI, 2011.
[15] Q. Liu. Reasoning and Decisions in Probabilistic Graphical Models–A Unified Framework. PhD thesis, University of California, Irvine, 2014.
[16] Q. Liu and A. Ihler. Bounding the partition function using Hölder's inequality. In ICML, 2011.
[17] Q. Liu and A. Ihler. Variational algorithms for marginal MAP. JMLR, 2013.
[18] R. Marinescu, R. Dechter, and A. Ihler. AND/OR search for marginal MAP. In UAI, 2014.
[19] D. Maua and C. de Campos. Anytime marginal maximum a posteriori inference. In ICML, 2012.
[20] C. Meek and Y. Wexler. Approximating max-sum-product problems using multiplicative error bounds. Bayesian Statistics, 2011.
[21] T. Meltzer, A. Globerson, and Y. Weiss. Convergent message passing algorithms: a unifying view. In UAI, 2009.
[22] O. Meshi and A. Globerson. An alternating direction method for dual MAP LP relaxation. In ECML/PKDD, 2011.
[23] O. Meshi, D.
Sontag, T. Jaakkola, and A. Globerson. Learning efficiently with approximate inference via dual losses. In ICML, 2010.
[24] J. Mooij. libDAI: A free and open source C++ library for discrete approximate inference in graphical models. JMLR, 2010.
[25] J. Naradowsky, S. Riedel, and D. Smith. Improving NLP through marginalization of hidden syntactic structure. In EMNLP, 2012.
[26] S. Nowozin and C. Lampert. Structured learning and prediction in computer vision. Foundations and Trends in Computer Graphics and Vision, 6, 2011.
[27] J. Park and A. Darwiche. Solving MAP exactly using systematic search. In UAI, 2003.
[28] J. Park and A. Darwiche. Complexity results and approximation strategies for MAP explanations. JAIR, 2004.
[29] W. Ping, Q. Liu, and A. Ihler. Marginal structured SVM with hidden variables. In ICML, 2014.
[30] N. Ruozzi and S. Tatikonda. Message-passing algorithms: Reparameterizations and splittings. IEEE Transactions on Information Theory, 2013.
[31] D. Sontag, A. Globerson, and T. Jaakkola. Introduction to dual decomposition for inference. Optimization for Machine Learning, 2011.
[32] D. Sontag and T. Jaakkola. Tree block coordinate descent for MAP in graphical models. In AISTATS, 2009.
[33] D. Sontag, T. Meltzer, A. Globerson, T. Jaakkola, and Y. Weiss. Tightening LP relaxations for MAP using message passing. In UAI, 2008.
[34] M. Wainwright, T. Jaakkola, and A. Willsky. A new class of upper bounds on the log partition function. IEEE Transactions on Information Theory, 2005.
[35] Y. Weiss, C. Yanover, and T. Meltzer. MAP estimation, linear programming and belief propagation with convex free energies. In UAI, 2007.
[36] T. Werner. A linear programming approach to max-sum problem: A review. TPAMI, 2007.
[37] J. Yarkony, C. Fowlkes, and A. Ihler. Covering trees and lower-bounds on quadratic assignment. In CVPR, 2010.
[38] C. Yuan and E. Hansen. Efficient computation of jointree bounds for systematic map search. In IJCAI, 2009.
[39] C. Yuan, T. Lu, and M.
Druzdzel. Annealed MAP. In UAI, 2004.
The Brain Uses Reliability of Stimulus Information when Making Perceptual Decisions

Sebastian Bitzer¹ sebastian.bitzer@tu-dresden.de
Stefan J. Kiebel¹ stefan.kiebel@tu-dresden.de
¹Department of Psychology, Technische Universität Dresden, 01062 Dresden, Germany

Abstract
In simple perceptual decisions the brain has to identify a stimulus based on noisy sensory samples from the stimulus. Basic statistical considerations state that the reliability of the stimulus information, i.e., the amount of noise in the samples, should be taken into account when the decision is made. However, for perceptual decision making experiments it has been questioned whether the brain indeed uses the reliability for making decisions when confronted with unpredictable changes in stimulus reliability. We here show that even the basic drift diffusion model, which has frequently been used to explain experimental findings in perceptual decision making, implicitly relies on estimates of stimulus reliability. We then show that only those variants of the drift diffusion model which allow stimulus-specific reliabilities are consistent with neurophysiological findings. Our analysis suggests that the brain estimates the reliability of the stimulus on a short time scale of at most a few hundred milliseconds.

1 Introduction

In perceptual decision making participants have to identify a noisy stimulus. In typical experiments, only two possibilities are considered [1]. The amount of noise on the stimulus is usually varied to manipulate task difficulty. With higher noise, participants' decisions are slower and less accurate. Early psychology research established that biased random walk models explain the response distributions (choice and reaction time) of perceptual decision making experiments [2]. These models describe decision making as an accumulation of noisy evidence until a bound is reached and correspond, in discrete time, to sequential analysis [3] as developed in statistics [4].
More recently, electrophysiological experiments provided additional support for such bounded accumulation models; see [1] for a review. There appears to be a general consensus that the brain implements the mechanisms required for bounded accumulation, although different models have been proposed for how exactly this accumulation is employed by the brain [5, 6, 1, 7, 8]. An important assumption of all these models is that the brain provides the input to the accumulation, the so-called evidence, but the most established models actually do not define how this evidence is computed by the brain [3, 5, 9, 1]. In this contribution, we will show that addressing this question offers a new perspective on how exactly perceptual decision making may be performed by the brain. Probabilistic models provide a precise definition of evidence: Evidence is the likelihood of a decision alternative under a noisy measurement, where the likelihood is defined through a generative model of the measurements under the hypothesis that the considered decision alternative is true. In particular, this generative model implements assumptions about the expected distribution of measurements. Therefore, the likelihood of a measurement is large when measurements are assumed, by the decision maker, to be reliable and small otherwise. For modelling perceptual decision making experiments, the evidence input, which is assumed to be pre-computed by the brain, should similarly depend on the reliability of measurements as estimated by the brain. However, this has been disputed before, e.g., [10]. The argument is that typical experimental setups make the reliability of each trial unpredictable for the participant. Therefore, it was argued, the brain can have no correct estimate of the reliability. This issue has been addressed in a neurally inspired, probabilistic model based on probabilistic population codes (PPCs) [7].
The authors have shown that PPCs can implement perceptual decision making without having to explicitly represent reliability in the decision process. This remarkable result was obtained by making the plausible assumption that reliability has a multiplicative effect on the tuning curves of the neurons in the PPCs¹. Current stimulus reliability, therefore, was implicitly represented in the tuning curves of model neurons and still affected decisions. In this paper we will investigate on a conceptual level whether the brain estimates measurement reliability even within trials, while we will not consider the details of its neural representation. We will show that even a simple, widely used bounded accumulation model, the drift diffusion model, is based on some estimate of measurement reliability. Using this result, we will analyse the results of a perceptual decision making experiment [11] and will show that the recorded behaviour, together with neurophysiological findings, strongly favours the hypothesis that the brain weights evidence using a current estimate of measurement reliability, even when reliability changes unpredictably across trials. This paper is organised as follows: We first introduce the notions of measurement, evidence and likelihood in the context of the experimentally well-established random dot motion (RDM) stimulus. We define these quantities formally by resorting to a simple probabilistic model which has been shown to be equivalent to the drift diffusion model [12, 13]. This, in turn, allows us to formulate three competing variants of the drift diffusion model that either do not use trial-dependent reliability (variant CONST), or do use trial-dependent reliability of measurements during decision making (variants DDM and DEPC, see below for definitions).
Finally, using the data of [11], we show that only variants DDM and DEPC, which use trial-dependent reliability, are consistent with previous findings about perceptual decision making in the brain.

2 Measurement, evidence and likelihood in the random dot motion stimulus

The widely used random dot motion (RDM) stimulus consists of a set of randomly located dots shown within an invisible circle on a screen [14]. From one video frame to the next some of the dots move in one direction which is fixed within a trial of an experiment, i.e., a subset of the dots moves coherently in one direction. All other dots are randomly replaced within the circle. Although there are many variants of how exactly to present the dots [15], the main idea is that the coherently moving dots indicate a motion direction which participants have to decide upon. By varying the proportion of dots which move coherently, also called the 'coherence' of the stimulus, the difficulty of the task can be varied effectively. We will now consider what kind of evidence the brain can in principle extract from the RDM stimulus in a short time window, for example, from one video frame to the next, within a trial. For simplicity we call this time window 'time point' from here on, the idea being that evidence is accumulated over different time points, as postulated by bounded accumulation models in perceptual decision making [3, 1]. At a single time point, the brain can measure motion directions from the dots in the RDM display. By construction, a proportion of measurable motion directions will be in one specific direction, but, through the random relocation of other dots, the RDM display will also contain motion in random directions. Therefore, the brain observes a distribution of motion directions at each time point. This distribution can be considered a 'measurement' of the RDM stimulus made by the brain.
Due to the randomness of each time frame, this distribution varies across time points, and the variation in the distribution reduces for increasing coherences. We have illustrated this using rose histograms in Fig. 1 for three different coherence levels.

¹Note that the precise effect on tuning curves may depend on the particular distribution of measurements and its encoding by the neural population.

Figure 1: Illustration of possible motion direction distributions that the brain can measure from an RDM stimulus (rose histograms; rows are different time points, columns are the coherences 3.2%, 9.0% and 25.6%). The true, underlying motion direction was 'left', i.e., 180°. For low coherence (e.g., 3.2%) the measured distribution is very variable across time points and may indicate the presence of many different motion directions at any given time point. As coherence increases (from 9% to 25.6%), the true, underlying motion direction will increasingly dominate measured motion directions, simultaneously leading to decreased variation of the measured distribution across time points.

To compute the evidence for the decision whether the RDM stimulus contains predominantly motion to one of the two considered directions, e.g., left and right, the brain must check how strongly these directions are represented in the measured distribution, e.g., by estimating the proportion of motion towards left and right. We call these proportions evidence for left, e_left, and evidence for right, e_right. As the measured distribution over motion directions may vary strongly across time points, the computed evidences for each single time point may be unreliable.
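The measurement-to-evidence step just described can be sketched in a small simulation. The sketch below is our own toy construction (not the authors' stimulus code): a fraction of dots moves in the true direction, the rest move in random directions, and evidence is the proportion of measured directions near a candidate direction.

```python
import math
import random

random.seed(0)

def sample_directions(coherence, n_dots=100, true_dir=math.pi):
    """One 'time point': a fraction `coherence` of dots moves in the true
    direction, the rest in uniformly random directions."""
    return [true_dir if random.random() < coherence
            else random.uniform(0.0, 2.0 * math.pi)
            for _ in range(n_dots)]

def evidence(dirs, center, half_width=math.pi / 4):
    """Proportion of measured directions within +-half_width of `center`;
    a crude stand-in for e_left / e_right."""
    def ang_dist(a, b):
        d = abs(a - b) % (2.0 * math.pi)
        return min(d, 2.0 * math.pi - d)
    return sum(ang_dist(d, center) <= half_width for d in dirs) / len(dirs)

def mean_e_left(coherence, n_frames=200):
    """Average evidence for 'left' (180 degrees) over many time points."""
    return sum(evidence(sample_directions(coherence), math.pi)
               for _ in range(n_frames)) / n_frames

# Higher coherence yields larger (and less variable) leftward evidence.
low, high = mean_e_left(0.032), mean_e_left(0.256)
```

Single-frame evidence values scatter around these means, which is exactly the unreliability that the accumulation process has to cope with.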
Probabilistic approaches weight evidence by its reliability such that unreliable evidence is not over-interpreted. The question is: Does the brain perform this reliability-based computation as well? More formally, for a given coherence c, does the brain weight evidence by an estimate of reliability that depends on c, l = e · r(c), which we call 'likelihood'², or does it ignore changing reliabilities and use a weighting unrelated to coherence, e′ = e · r̄?

²For convenience, we use imprecise denominations here. As will become clear below, l is in our case a Gaussian log-likelihood, hence the linear weighting of evidence by reliability.

3 Bounded accumulation models

Bounded accumulation models postulate that decisions are made based on a decision variable. In particular, this decision variable is driven towards the correct alternative and is perturbed by noise. A decision is made when the decision variable reaches a specific value. In the drift diffusion model, these three components are represented by drift, diffusion and bound [3]. We will now relate the typical drift diffusion formalism to our notions of measurement, evidence and likelihood by linking the drift diffusion model to probabilistic formulations. In the drift diffusion model, the decision variable evolves according to a simple Wiener process with drift. In discrete time the change in the decision variable y can be written as

δy = y_t − y_{t−δt} = v·δt + √δt · s·ε_t    (1)

where v is the drift, ε_t ∼ N(0, 1) is Gaussian noise and s controls the amount of diffusion. This equation bears an interesting link to how the brain may compute the evidence. For example, it has been stated in the context of an experiment with RDM stimuli with two decision alternatives that the change in y, often called 'momentary evidence', "is thought to be a difference in firing rates of direction selective neurons with opposite direction preferences." [11, Supp. Fig.
6] Formally:

δy = ρ_{left,t} − ρ_{right,t}    (2)

where ρ_{left,t} is the firing rate of the population selective to motion towards left at time point t. Because the firing rates ρ depend on the considered decision alternative, they represent a form of evidence extracted from the stimulus measurement instead of the stimulus measurement itself (see our definitions in the previous section). It is unclear, however, whether the firing rates ρ just represent the evidence (ρ = e′) or whether they represent the likelihood, ρ = l, i.e., the evidence weighted by coherence-dependent reliability. To clarify the relation between firing rates ρ, evidence e and likelihood l we consider probabilistic models of perceptual decision making. Several variants have been suggested and related to other forms of decision making [6, 16, 9, 7, 12, 17, 18]. For its simplicity, which is sufficient for our argument, we here consider the model presented in [13], for which a direct transformation from probabilistic model to the drift diffusion model has already been shown. This model defines two Gaussian generative models of measurements which are derived from the stimulus:

p(x_t | left) = N(−1, δt·σ̂²)    p(x_t | right) = N(1, δt·σ̂²)    (3)

where σ̂ represents the variability of measurements expected by the brain. Similarly, it is assumed that the measurements x_t are sampled from a Gaussian with variance σ² which captures variance both from the stimulus and due to other noise sources in the brain:

x_t ∼ N(±1, δt·σ²)    (4)

where the mean is −1 for a 'left' stimulus and 1 for a 'right' stimulus. Evidence for a decision is computed in this model by calculating the likelihood of a measurement x_t under the hypothesised generative models. To be precise, we consider the log-likelihoods, which are

l_left = −log(√(2π·δt)·σ̂) − (x_t + 1)² / (2·δt·σ̂²)    l_right = −log(√(2π·δt)·σ̂) − (x_t − 1)² / (2·δt·σ̂²)
(5)

We note three important points: 1) The first term on the right-hand side means that for decreasing σ̂ the likelihood l increases when the measurement x_t is close to the means, i.e., −1 and 1. This contribution, however, cancels when the difference between the likelihoods for left and right is computed. 2) The likelihood is large for a measurement x_t when x_t is close to the corresponding mean. 3) The contribution of the stimulus is weighted by the assumed reliability r = σ̂⁻². This model of the RDM stimulus is simple but captures the most important properties of the stimulus. In particular, a high coherence RDM stimulus has a large proportion of motion in the correct direction with very low variability of measurements, whereas a low coherence RDM stimulus tends to have lower proportions of motion in the correct direction, with high variability (cf. Fig. 1). The Gaussian model captures these properties by adjusting the noise variance such that a high coherence corresponds to low noise and low coherence to high noise: Under high noise the values x_t will vary strongly and tend to be rather distant from −1 and 1, whereas for low noise the values x_t will be close to −1 or 1 with low variability. Hence, as expected, the model produces large evidences/likelihoods for low noise and small evidences/likelihoods for high noise. This intuitive relation between stimulus and probabilistic model is the basis for us to proceed to show that the reliability of the stimulus r, connected to the coherence level c, appears at a prominent position in the drift diffusion model. Crucially, the drift diffusion model can be derived as the sum of log-likelihood ratios across time [3, 9, 12, 13]. In particular, a discrete time drift diffusion process can be derived by subtracting the likelihoods of Eq. (5):

δy = l_right − l_left = [(x_t + 1)² − (x_t − 1)²] / (2·δt·σ̂²) = 2r·x_t / δt
(6)

Consequently, the change in y within a trial, in which the true stimulus is constant, is Gaussian: δy ∼ N(2r/δt, 4r²σ²/δt). This replicates the model described in [11, Supp. Fig. 6], where the parameterisation of the model, however, more directly followed that of the Gaussian distribution and did not explicitly take time into account: δy ∼ N(Kc, S²), where K and S are free parameters and c is the coherence of the RDM stimulus. By analogy to the probabilistic model, we therefore see that the model in [11] implicitly assumes that reliability r depends on coherence c. More generally, the parameters of the drift diffusion model of Eq. (1) and those of the probabilistic model can be expressed as functions of each other [13]:

v = ±2 / (δt²·σ̂²) = ±2r / δt²    (7)
s = 2σ / (δt·σ̂²) = 2r·σ / δt    (8)

These equations state that both drift v and diffusion s depend on the assumed reliability r of the measurements x. Does the brain use, and necessarily compute, this reliability which depends on coherence? In the following section we answer this question by comparing how well three variants of the drift diffusion model, which implement different assumptions about r, conform to experimental findings.

4 Use of reliability in perceptual decision making: experimental evidence

We first show that different assumptions about the reliability r translate to variants of the drift diffusion model. We then fit all variants to behavioural data (performances and mean reaction times) of an experiment for which neurophysiological data has also been reported [11] and demonstrate that only those variants which allow reliability to depend on coherence level lead to accumulation mechanisms which are consistent with the neurophysiological findings.

4.1 Drift diffusion model variants

For the drift diffusion model of Eq. (1) the accuracy A and mean decision time T predicted by the model can be determined analytically [9]:

A = 1 − 1 / (1 + exp(2vb/s²))    (9)
T = (b/v) · tanh(vb/s²)    (10)

where b is the bound.
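Eqs. (9) and (10) can be implemented directly. The short script below (parameter values are arbitrary placeholders) also makes explicit that both formulas depend on the parameters only through vb/s² and b/v, so jointly rescaling v, b and s leaves A and T unchanged.

```python
import math

# Predicted accuracy (Eq. 9) and mean decision time (Eq. 10) of the
# drift diffusion model with drift v, bound b and diffusion s.
def accuracy(v, b, s):
    return 1.0 - 1.0 / (1.0 + math.exp(2.0 * v * b / s ** 2))

def mean_decision_time(v, b, s):
    return (b / v) * math.tanh(v * b / s ** 2)

# Placeholder parameter values, chosen for illustration only
v, b, s = 0.03, 20.0, 1.0
A = accuracy(v, b, s)
T = mean_decision_time(v, b, s)

# Both formulas depend only on v*b/s^2 and b/v, so the rescaling
# (v, b, s) -> (k*v, k*b, k*s) leaves A and T unchanged: at most two of
# the three parameters can be recovered from behavioural data.
k = 5.0
assert abs(accuracy(k * v, k * b, k * s) - A) < 1e-12
assert abs(mean_decision_time(k * v, k * b, k * s) - T) < 1e-9
```

This scaling degeneracy is why one of the three parameters must be fixed before fitting.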
These equations highlight an important caveat of the drift diffusion model: Only two of the three parameters can be determined uniquely from behavioural data. For fitting the model, one of the parameters needs to be fixed. In most cases, the diffusion s is arbitrarily set to 0.1 [9], or is fit with a constant value across stimulus strengths [11]. We call this standard variant of the drift diffusion model the DDM. If s is constant across stimulus strengths, the other two parameters of the model must explain differences in behaviour between stimulus strengths by taking on values that depend on stimulus strength. Indeed, it has been found that primarily drift v explains such differences; see also below. Eq. (7) states that drift depends on estimated reliability r. So, if drift varies across stimulus strengths, this strongly suggests that r must vary across stimulus strengths, i.e., that r must depend on coherence: r(c). However, the drift diffusion formalism allows for two other obvious variants of parameterisation: one in which the bound b is constant across stimulus strengths, b = b̄, and, conversely, one in which drift v is constant across stimulus strengths, v = v̄ ∝ r̄ (Eq. 7). We call these variants DEPC and CONST, respectively, for their property to weight evidence by reliability that either depends on coherence, r(c), or not, r̄.

4.2 Experimental data

In the following we will analyse the data presented in [11]. This data set has two major advantages for our purposes: 1) Reported accuracies and mean reaction times (Fig. 1d,f) are averages based on 15,937 trials in total. Therefore, noise in this data set is minimal (cf. small error bars in Fig. 1d,f) such that any potential effects of overfitting on found parameter values will be small, especially in relation to the effect induced by different stimulus strengths. 2) The behavioural data is accompanied by recordings of neurons which have been implicated in the decision making process.
We can, therefore, compare the accumulation mechanisms resulting from the fit to behaviour with the actual neurophysiological recordings. Furthermore, the structure of the experiments was such that the stimulus in subsequent trials had random strength, i.e., the brain could not have estimated the stimulus strength of a trial before the trial started. In the experiment of [11] that we consider here, two monkeys performed a two-alternative forced choice task based on the RDM stimulus. Data for eight different coherences were reported. To avoid ceiling effects, which prevent the unique identification of parameter values in the drift diffusion model, we exclude those coherences which lead to an accuracy of 0.5 (random choices) or to an accuracy of 1 (perfect choices). The behavioural data of the remaining coherence levels are presented in Table 1.

Table 1: Behavioural data of [11] used in our analysis. RT = reaction time.

coherence (%)        3.2   6.4   9     12    25.6
accuracy (fraction)  0.63  0.76  0.79  0.89  0.99
mean RT (ms)         613   590   580   535   440

The analysis of [11] revealed a nondecision time, i.e., a component of the reaction time that is unrelated to the decision process (cf. [3]), of ca. 200 ms. Using this estimate, we determined the mean decision time T by subtracting 200 ms from the mean reaction times shown in Table 1. The main findings for the neural recordings, which replicated previous findings [19, 1], were that i) firing rates at the end of decisions were similar and, particularly, showed no significant relation to coherence [11, Fig. 5], whereas ii) the buildup rate of neural firing within a trial had an approximately linear relation to coherence [11, Fig. 4].

4.3 Fits of drift diffusion model variants to behaviour

We can easily fit the model variants (DDM, DEPC and CONST) to accuracy A and mean decision time T using Eqs. (9) and (10). In accordance with previous approaches we selected values for the respective redundant parameters.
Since the redundant parameter value, or its inverse, simply scales the fitted parameter values (cf. Eqs. 9 and 10), the exact value is irrelevant and we fix, in each model variant, the redundant parameter to 1.

Figure 2: Fitting results: values of the free parameters that replicate the accuracy and mean RT recorded in the experiment (Table 1), in relation to coherence. The remaining, non-free parameter was fixed to 1 for each variant. Left: the DDM variant with free parameters drift v (green) and bound b (purple). Middle: the DEPC variant with free parameters v and diffusion s (orange). Right: the CONST variant with free parameters s and b.

Fig. 2 shows the inferred parameter values. In congruence with previous findings, the DDM variant explained variation in behaviour due to an increasing coherence mostly with an increasing drift v (green in Fig. 2). Specifically, drift and coherence appear to have a straightforward, linear relation. The same finding holds for the DEPC variant. In contrast to the DDM variant, however, which also exhibited a slight increase in the bound b (purple in Fig. 2) with increasing coherence, the DEPC variant explained the corresponding differences in behaviour by decreasing diffusion s (orange in Fig. 2). As the drift v was fixed in CONST, this variant explained coherence-dependent behaviour with large and almost identical changes in both diffusion s and bound b, such that large parameter values occurred for small coherences and the relation between parameters and coherence appeared to be quadratic.
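The DDM-variant fit can be reproduced in closed form: with s fixed, inverting Eq. (9) gives vb = (s²/2)·log(A/(1−A)), and Eq. (10) then gives b/v = T/tanh(vb/s²), from which v and b follow. The sketch below applies this to the Table 1 data (with the 200 ms nondecision time subtracted); it is our own reconstruction, not the authors' code.

```python
import math

# Behavioural data from Table 1; decision time = mean RT - 200 ms
coherence = [3.2, 6.4, 9.0, 12.0, 25.6]
acc       = [0.63, 0.76, 0.79, 0.89, 0.99]
dec_time  = [rt - 200.0 for rt in [613.0, 590.0, 580.0, 535.0, 440.0]]

def fit_ddm(A, T, s=1.0):
    """Invert Eqs. (9)-(10) for drift v and bound b with diffusion s
    fixed (the DDM variant). vb and b/v are identified separately."""
    vb = 0.5 * s ** 2 * math.log(A / (1.0 - A))  # from Eq. (9)
    b_over_v = T / math.tanh(vb / s ** 2)        # from Eq. (10)
    b = math.sqrt(vb * b_over_v)
    return vb / b, b                             # (v, b)

fits = [fit_ddm(A, T) for A, T in zip(acc, dec_time)]
vs = [v for v, _ in fits]
bs = [b for _, b in fits]

# Replicates the qualitative result of Fig. 2 (left): drift v grows
# roughly linearly with coherence while the bound b changes only slightly.
assert all(v1 < v2 for v1, v2 in zip(vs, vs[1:]))
assert max(bs) / min(bs) < 1.3
```

With s = 1, the recovered drift rises from roughly 0.01 at 3.2% coherence to roughly 0.1 at 25.6%, while the bound stays near 20, matching the pattern shown in Fig. 2 (left).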
Figure 3: Drift-diffusion properties of fitted model variants. Top row: 15 example trajectories of y for the different model variants with fitted parameters for 6.4% (blue) and 25.6% (yellow) coherence. Trajectories end when they reach the bound for the first time, which corresponds to the decision time in that simulated trial. Notice that the same random samples of ε were used across variants and coherences. Bottom row: Trajectories of y averaged over trials in which the first alternative (top bound) was chosen, for the three model variants. The format of the plots follows that of [8, Supp. Fig. 4]: Left panels show the buildup of y from the start of decision making for the 5 different coherences. Right panels show the averaged drift diffusion trajectories when aligned to the time that a decision was made.

We further investigated the properties of the model variants with the fitted parameter values. The top row of Fig. 3 shows example drift diffusion trajectories (y in Eq. (1)) simulated at a resolution of 1 ms for two coherences. Following [11], we interpret y as the decision variables represented by the firing rates of neurons in monkey area LIP. These plots exemplify that the DDM and DEPC variants lead to qualitatively very similar predictions of neural responses, whereas the trajectories produced by the CONST variant stand out because the neural responses to large coherences are predicted to be smaller than those to small coherences.
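Such trajectory simulations are straightforward to sketch. The following minimal Euler-Maruyama simulation of Eq. (1) uses generic placeholder parameters (not the fitted values of Fig. 2) and checks the simulated accuracy and mean decision time against the analytic predictions of Eqs. (9)-(10).

```python
import math
import random

random.seed(7)

def simulate_trial(v, s, b, dt=0.001):
    """Simulate Eq. (1) by Euler-Maruyama until y hits +b or -b.
    Returns (hit_upper_bound, decision_time)."""
    y, t = 0.0, 0.0
    while abs(y) < b:
        y += v * dt + math.sqrt(dt) * s * random.gauss(0.0, 1.0)
        t += dt
    return y >= b, t

# Placeholder parameters (dimensionless), for illustration only
v, s, b, n = 1.0, 1.0, 1.0, 3000
trials = [simulate_trial(v, s, b) for _ in range(n)]
acc = sum(hit for hit, _ in trials) / n
mean_dt = sum(t for _, t in trials) / n

# Analytic predictions, Eqs. (9)-(10)
A = 1.0 - 1.0 / (1.0 + math.exp(2.0 * v * b / s ** 2))
T = (b / v) * math.tanh(v * b / s ** 2)

# The simulated statistics should match up to sampling and
# discretization error.
assert abs(acc - A) < 0.05
assert abs(mean_dt - T) < 0.10
```

Averaging the simulated trajectories over trials, as in the bottom row of Fig. 3, only requires storing the path of y instead of its first-passage summary.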
We have summarised predicted neural responses to all coherences in the bottom row of Fig. 3, where we show averages of y across 5000 trials either aligned to the start of decision making (left panels) or aligned to the decision time (right panels). These plots illustrate that the DDM and DEPC variants replicate the main neurophysiological findings of [11]: Neural responses at the end of the decision were similar and independent of coherence. For the DEPC variant this was built into the model, because the bound was fixed. For the DDM variant the bound shows a small dependence on coherence, but the neural responses aligned to decision time were still very similar across coherences. The DDM and DEPC variants further replicate the finding that the buildup of neural firing depends approximately linearly on coherence (the normalised mean square error of a corresponding linear model was 0.04 and 0.03, respectively). In contrast, the CONST variant exhibited an inverse relation between coherence and buildup of predicted neural response, i.e., buildup was larger for small coherences. Furthermore, neural responses at decision time strongly depended on coherence. Therefore, the CONST variant, as the only variant which does not use coherence-dependent reliability, is also the only variant which is clearly inconsistent with the neurophysiological findings.

5 Discussion

We have investigated whether the brain uses online estimates of stimulus reliability when making simple perceptual decisions. From a probabilistic perspective, fundamental considerations suggest that using accurate estimates of stimulus reliability leads to better decisions, but in the field of perceptual decision making it has been questioned whether the brain estimates stimulus reliability on the very short time scale of a few hundred milliseconds.
By using a probabilistic formulation of the most widely accepted model, we were able to show that only those variants of the model which assume online reliability estimation are consistent with the reported experimental findings. Our argument is based on a strict distinction between measurements, evidence and likelihood, which may be briefly summarised as follows: Measurements are raw stimulus features that do not relate to the decision, evidence is a transformation of measurements into a decision-relevant space reflecting the decision alternatives, and likelihood is evidence scaled by a current estimate of measurement reliabilities. It is easy to overlook this distinction at the level of bounded accumulation models, such as the drift diffusion model, because these models assume a pre-computed form of evidence as input. However, this evidence has to be computed by the brain, as we have demonstrated based on the example of the RDM stimulus and using behavioural data. We chose one particular, simple probabilistic model, because this model has a direct equivalence with the drift diffusion model which was used to explain the data of [11] before. Other models may not have allowed conclusions about reliability estimates in the brain. In particular, [13] introduced an alternative model that also leads to equivalence with the drift diffusion model, but explains differences in behaviour by different mean measurements and their representations in the generative model. Instead of varying reliability across coherences, this model would vary the difference of means in the second summand of Eq. (5) directly, without leading to any difference in the drift diffusion trajectories represented by y of Eq. (1) when compared to those of the probabilistic model chosen here.
The interpretation of the alternative model of [13], however, is far removed from basic assumptions about the RDM stimulus: Whereas the alternative model assumes that the reliability of the stimulus is fixed across coherences, the noise in the RDM stimulus clearly depends on coherence. We, therefore, discarded the alternative model here. As a slight caveat, the neurophysiological findings on which we based our conclusion could have been the result of a search for neurons that exhibit the properties of the conventional drift diffusion model (the DDM variant). We cannot exclude this possibility completely, but given the wide range and persistence of consistent evidence for the standard bounded accumulation theory of decision making [1, 20], we find it rather unlikely that the results in [19] and [11] were purely found by chance. Even if our conclusion about the rapid estimation of reliability by the brain does not endure, our formal contribution holds: We clarified that the drift diffusion model in its most common variant (DDM) is consistent with, and even implicitly relies on, coherence-dependent estimates of measurement reliability. In the experiment of [11], coherences of the RDM stimulus were chosen randomly for each trial. Consequently, participants could not predict the reliability of the RDM stimulus for the upcoming trial, i.e., the participants' brains could not have had a good estimate of stimulus reliability at the start of a trial. Yet, our analysis strongly suggests that coherence-dependent reliabilities were used during decision making. The brain, therefore, must have adapted its reliability estimates within trials, even on the short timescale of a few hundred milliseconds. On the level of analysis dictated by the drift diffusion model we cannot observe this adaptation directly; it only manifests itself as a change in the mean drift, which is assumed to be constant within a trial.
First models of simultaneous decision making and reliability estimation have been suggested [21], but clearly more work in this direction is needed to elucidate the underlying mechanism used by the brain.

References

[1] Joshua I Gold and Michael N Shadlen. The neural basis of decision making. Annu Rev Neurosci, 30:535–574, 2007.
[2] I. D. John. A statistical decision theory of simple reaction time. Australian Journal of Psychology, 19(1):27–34, 1967.
[3] R. Duncan Luce. Response Times: Their Role in Inferring Elementary Mental Organization. Number 8 in Oxford Psychology Series. Oxford University Press, 1986.
[4] Abraham Wald. Sequential Analysis. Wiley, New York, 1947.
[5] Xiao-Jing Wang. Probabilistic decision making by slow reverberation in cortical circuits. Neuron, 36(5):955–968, Dec 2002.
[6] Rajesh P N Rao. Bayesian computation in recurrent neural circuits. Neural Comput, 16(1):1–38, Jan 2004.
[7] Jeffrey M Beck, Wei Ji Ma, Roozbeh Kiani, Tim Hanks, Anne K Churchland, Jamie Roitman, Michael N Shadlen, Peter E Latham, and Alexandre Pouget. Probabilistic population codes for Bayesian decision making. Neuron, 60(6):1142–1152, December 2008.
[8] Anne K Churchland, R. Kiani, R. Chaudhuri, Xiao-Jing Wang, Alexandre Pouget, and M. N. Shadlen. Variance as a signature of neural computations during decision making. Neuron, 69(4):818–831, Feb 2011.
[9] Rafal Bogacz, Eric Brown, Jeff Moehlis, Philip Holmes, and Jonathan D. Cohen. The physics of optimal decision making: a formal analysis of models of performance in two-alternative forced-choice tasks. Psychol Rev, 113(4):700–765, October 2006.
[10] Michael N Shadlen, Roozbeh Kiani, Timothy D Hanks, and Anne K Churchland. Neurobiology of decision making: An intentional framework. In Christoph Engel and Wolf Singer, editors, Better Than Conscious? Decision Making, the Human Mind, and Implications For Institutions. MIT Press, 2008.
[11] Anne K Churchland, Roozbeh Kiani, and Michael N Shadlen. Decision-making with multiple alternatives. Nat Neurosci, 11(6):693–702, Jun 2008.
[12] Peter Dayan and Nathaniel D Daw. Decision theory, reinforcement learning, and the brain. Cogn Affect Behav Neurosci, 8(4):429–453, Dec 2008.
[13] Sebastian Bitzer, Hame Park, Felix Blankenburg, and Stefan J Kiebel. Perceptual decision making: Drift-diffusion model is equivalent to a Bayesian model. Frontiers in Human Neuroscience, 8(102), 2014.
[14] W. T. Newsome and E. B. Paré. A selective impairment of motion perception following lesions of the middle temporal visual area MT. J Neurosci, 8(6):2201–2211, June 1988.
[15] Praveen K. Pilly and Aaron R. Seitz. What a difference a parameter makes: a psychophysical comparison of random dot motion algorithms. Vision Res, 49(13):1599–1612, Jun 2009.
[16] Angela J. Yu and Peter Dayan. Inference, attention, and decision in a Bayesian neural architecture. In Lawrence K. Saul, Yair Weiss, and Léon Bottou, editors, Advances in Neural Information Processing Systems 17, pages 1577–1584. MIT Press, Cambridge, MA, 2005.
[17] Alec Solway and Matthew M. Botvinick. Goal-directed decision making as probabilistic inference: a computational framework and potential neural correlates. Psychol Rev, 119(1):120–154, January 2012.
[18] Yanping Huang, Abram Friesen, Timothy Hanks, Mike Shadlen, and Rajesh Rao. How prior probability influences decision making: A unifying probabilistic model. In P. Bartlett, F.C.N. Pereira, C.J.C. Burges, L. Bottou, and K.Q. Weinberger, editors, Advances in Neural Information Processing Systems 25, pages 1277–1285. 2012.
[19] Jamie D Roitman and Michael N Shadlen. Response of neurons in the lateral intraparietal area during a combined visual discrimination reaction time task. J Neurosci, 22(21):9475–9489, Nov 2002.
[20] Timothy D. Hanks, Charles D. Kopec, Bingni W. Brunton, Chunyu A. Duan, Jeffrey C. Erlich, and Carlos D. Brody. Distinct relationships of parietal and prefrontal cortices to evidence accumulation. Nature, Jan 2015.
[21] Sophie Denève. Making decisions with unknown sensory reliability. Front Neurosci, 6:75, 2012.
Are You Talking to a Machine? Dataset and Methods for Multilingual Image Question Answering

Haoyuan Gao1 Junhua Mao2 Jie Zhou1 Zhiheng Huang1 Lei Wang1 Wei Xu1
1Baidu Research 2University of California, Los Angeles
gaohaoyuan@baidu.com, mjhustc@ucla.edu, {zhoujie01,huangzhiheng,wanglei22,wei.xu}@baidu.com

Abstract

In this paper, we present the mQA model, which is able to answer questions about the content of an image. The answer can be a sentence, a phrase or a single word. Our model contains four components: a Long Short-Term Memory (LSTM) to extract the question representation, a Convolutional Neural Network (CNN) to extract the visual representation, an LSTM for storing the linguistic context in an answer, and a fusing component to combine the information from the first three components and generate the answer. We construct a Freestyle Multilingual Image Question Answering (FM-IQA) dataset to train and evaluate our mQA model. It contains over 150,000 images and 310,000 freestyle Chinese question-answer pairs and their English translations. The quality of the generated answers of our mQA model on this dataset is evaluated by human judges through a Turing Test. Specifically, we mix the answers provided by humans and our model. The human judges need to distinguish our model from the human. They also provide a score (i.e. 0, 1, 2; the larger the better) indicating the quality of the answer. We propose strategies to monitor the quality of this evaluation process. The experiments show that in 64.7% of cases, the human judges cannot distinguish our model from humans. The average score is 1.454 (1.918 for human). The details of this work, including the FM-IQA dataset, can be found on the project page: http://idl.baidu.com/FM-IQA.html

1 Introduction

Recently, there has been increasing interest in the field of multimodal learning for both natural language and vision.
In particular, many studies have made rapid progress on the task of image captioning [26, 15, 14, 40, 6, 8, 4, 19, 16, 42]. Most of them are built based on deep neural networks (e.g. deep Convolutional Neural Networks (CNN [17]), Recurrent Neural Networks (RNN [7]) or Long Short-Term Memory networks (LSTM [12])). The large-scale image datasets with sentence annotations (e.g., [21, 43, 11]) play a crucial role in this progress. Despite the success of these methods, there are still many issues to be discussed and explored. In particular, the task of image captioning only requires generic sentence descriptions of an image. But in many cases, we only care about a particular part or object of an image. The image captioning task lacks the interaction between the computer and the user (as we cannot input our preference and interest).

In this paper, we focus on the task of visual question answering. In this task, the method needs to provide an answer to a freestyle question about the content of an image. We propose the mQA model to address this task. The inputs of the model are an image and a question. This model has four components (see Figure 2). The first component is an LSTM network that encodes a natural language sentence into a dense vector representation. The second component is a deep Convolutional Neural Network [36] that extracts the image representation. This component was pre-trained on the ImageNet Classification Task [33] and is fixed during the training. The third component is another LSTM network that encodes the information of the current word and previous words in the answer into dense representations. The fourth component fuses the information from the first three components to predict the next word in the answer. We jointly train the first, third and fourth components by maximizing the probability of the groundtruth answers in the training set using a log-likelihood loss function.

Figure 1: Sample answers to the visual questions generated by our model on the newly proposed Freestyle Multilingual Image Question Answering (FM-IQA) dataset. Examples (shown in Chinese with English translations): "What is the color of the bus?" / "The bus is red."; "What is there on the grass, except the person?" / "Sheep."; "Please look carefully and tell me what is the name of the vegetables in the plate?" / "Broccoli."; "Where is the kitty?" / "On the chair."; "What is there in yellow?" / "Bananas."

To lower the risk of overfitting, we allow weight sharing of the word embedding layer between the LSTMs in the first and third components. We also adopt the transposed weight sharing scheme proposed in [25], which allows weight sharing between the word embedding layer and the fully connected Softmax layer. To train our method, we construct a large-scale Freestyle Multilingual Image Question Answering dataset1 (FM-IQA, see details in Section 4) based on the MS COCO dataset [21]. The current version of the dataset contains 158,392 images with 316,193 Chinese question-answer pairs and their corresponding English translations.2 To diversify the annotations, the annotators are allowed to raise any question related to the content of the image. We propose strategies to monitor the quality of the annotations. This dataset contains a wide range of AI related questions, such as action recognition (e.g., "Is the man trying to buy vegetables?"), object recognition (e.g., "What is there in yellow?"), positions and interactions among objects in the image (e.g. "Where is the kitty?") and reasoning based on commonsense and visual content (e.g. "Why does the bus park here?", see the last column of Figure 3). Because of the variability of the freestyle question-answer pairs, it is hard to accurately evaluate the method with automatic metrics. We conduct a Visual Turing Test [38] using human judges.
Specifically, we mix the question-answer pairs generated by our model with the same set of question-answer pairs labeled by annotators. The human judges need to determine whether the answer is given by a model or a human. In addition, we also ask them to give a score of 0 (i.e. wrong), 1 (i.e. partially correct), or 2 (i.e. correct). The results show that our mQA model passes 64.7% of this test (i.e. its answers are treated as answers of a human) and the average score is 1.454. In the discussion, we analyze the failure cases of our model and show that, combined with the m-RNN [24] model, our model can automatically ask a question about an image and answer that question.

2 Related Work

Recent work has made significant progress using deep neural network models in both the fields of computer vision and natural language. For computer vision, methods based on Convolutional Neural Networks (CNN [20]) achieve state-of-the-art performance in various tasks, such as object classification [17, 34], detection [10, 44] and segmentation [3]. For natural language, the Recurrent Neural Network (RNN [7, 27]) and the Long Short-Term Memory network (LSTM [12]) are also widely used in machine translation [13, 5, 35] and speech recognition [28]. The structure of our mQA model is inspired by the m-RNN model [24] for the image captioning and image-sentence retrieval tasks. It adopts a deep CNN for vision and an RNN for language. We extend the model to handle the input of question and image pairs, and generate answers. In the experiments, we find that we can learn how to ask a good question about an image using the m-RNN model, and this question can be answered by our mQA model. There has been recent effort on the visual question answering task [9, 2, 22, 37]. However, most of them use a pre-defined and restricted set of questions. Some of these questions are generated from a template.
In addition, our FM-IQA dataset is much larger than theirs (e.g., there are only 2591 and 1449 images for [9] and [22] respectively).

1 We are actively developing and expanding the dataset; please find the latest information on the project page: http://idl.baidu.com/FM-IQA.html
2 The results reported in this paper are obtained from a model trained on the first version of the dataset (a subset of the current version), which contains 120,360 images and 250,569 question-answer pairs.

Figure 2: Illustration of the mQA model architecture. We input an image and a question about the image (i.e. "What is the cat doing?") to the model. The model is trained to generate the answer to the question (i.e. "Sitting on the umbrella"). The weight matrix in the word embedding layers of the two LSTMs (one for the question and one for the answer) is shared. In addition, as in [25], this weight matrix is also shared, in a transposed manner, with the weight matrix in the Softmax layer. Different colors in the figure represent different components of the model. (Best viewed in color.)

There are some concurrent and independent works on this topic: [1, 23, 32]. [1] propose a large-scale dataset also based on MS COCO and provide some simple baseline methods on it. Compared to them, we propose a stronger model for this task and evaluate our method using human judges. Our dataset also contains two different languages, which can be useful for other tasks, such as machine translation. Because we use a different set of annotators and different annotation requirements, our dataset and that of [1] can be complementary to each other and lead to some interesting topics, such as dataset transfer for visual question answering. Both [23] and [32] use a model containing a single LSTM and a CNN.
They concatenate the question and the answer (for [32], the answer is a single word; [23] also prefer a single word as the answer), and then feed them to the LSTM. Different from them, we use two separate LSTMs for questions and answers respectively, in consideration of the different properties (e.g. grammar) of questions and answers, while allowing the sharing of the word embeddings. For the dataset, [23] adopt the dataset proposed in [22], which is much smaller than our FM-IQA dataset. [32] utilize the annotations in MS COCO and synthesize a dataset with four pre-defined types of questions (i.e. object, number, color, and location). They also synthesize the answer as a single word. Their dataset can also be complementary to ours.

3 The Multimodal QA (mQA) Model

We show the architecture of our mQA model in Figure 2. The model has four components: (I) a Long Short-Term Memory (LSTM [12]) for extracting the semantic representation of a question; (II) a deep Convolutional Neural Network (CNN) for extracting the image representation; (III) an LSTM to extract the representation of the current word in the answer and its linguistic context; and (IV) a fusing component that incorporates the information from the first three parts together and generates the next word in the answer. These four components can be jointly trained together.3 The details of the four model components are described in Section 3.1. The effectiveness of the important components and strategies are analyzed in Section 5.3.

The inputs of the model are a question and the reference image. The model is trained to generate the answer. The words in the question and answer are represented by one-hot vectors (i.e. binary vectors with the length of the dictionary size N that have only one non-zero entry indicating the word's index in the dictionary). We add a ⟨BOA⟩ sign and a ⟨EOA⟩ sign, as two special words in the word dictionary, at the beginning and the end of the training answers respectively.
These signs will be used for generating the answer to the question in the testing stage. In the testing stage, we first input an image and a question about the image into the model. To generate the answer, we start with the start sign ⟨BOA⟩ and use the model to calculate the probability distribution of the next word. We then use a beam search scheme that keeps the best K candidates with the maximum probabilities according to the Softmax layer. We repeat the process until the model generates the end sign of the answer, ⟨EOA⟩.

3 In practice, we fix the CNN part because the gradient returned from the LSTM is very noisy. Finetuning the CNN takes a much longer time than just fixing it, and does not improve the performance significantly.

3.1 The Four Components of the mQA Model

(I). The semantic meaning of the question is extracted by the first component of the model. It contains a 512-dimensional word embedding layer and an LSTM layer with 400 memory cells. The function of the word embedding layer is to map the one-hot vector of the word into a dense semantic space. We feed this dense word representation into the LSTM layer. LSTM [12] is a Recurrent Neural Network [7] that is designed to address the exploding or vanishing gradient problem. The LSTM layer stores the context information in its memory cells and serves as the bridge among the words in a sequence (e.g. a question). To model the long-term dependency in the data more effectively, LSTM adds three gate nodes to the traditional RNN structure: the input gate, the output gate and the forget gate. The input gate and output gate regulate the read and write access to the LSTM memory cells. The forget gate resets the memory cells when their contents are out of date. Different from [23, 32], the image representation is not fed into the LSTM in this component. We believe this is reasonable because questions are just another input source for the model, so we should not add images as the supervision for them.
The information stored in the LSTM memory cells of the last word in the question (i.e. the question mark) will be treated as the representation of the sentence.

(II). The second component is a deep Convolutional Neural Network (CNN) that generates the representation of an image. In this paper, we use GoogLeNet [36]. Note that other CNN models, such as AlexNet [17] and VggNet [34], can also be used as this component in our model. We remove the final Softmax layer of the deep CNN and connect the remaining top layer to our model.

(III). The third component also contains a word embedding layer and an LSTM. The structure is similar to the first component. The activation of the memory cells for the words in the answer, as well as the word embeddings, will be fed into the fusing component to generate the next words in the answer. In [23, 32], they concatenate the training question and answer, and use a single LSTM. Because of the different properties (i.e. grammar) of questions and answers, in this paper we use two separate LSTMs for questions and answers respectively. We denote the LSTMs for the question and the answer as LSTM(Q) and LSTM(A) respectively in the rest of the paper. The weight matrix of LSTM(A) is not shared with that of LSTM(Q) in the first component. Note that the semantic meaning of single words should be the same for questions and answers, so we share the parameters of the word embedding layer between the first and third components.

(IV). Finally, the fourth component fuses the information from the first three components.
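The testing-stage decoding described above (start from ⟨BOA⟩, keep the best K candidates under the Softmax probabilities, stop at the end sign) can be sketched as a generic beam search. The toy next-word model below is purely illustrative and stands in for the trained network's Softmax output; it is not the authors' implementation.

```python
import math

def beam_search(next_probs, beam_size=3, max_len=10, bos="<BOA>", eos="<EOA>"):
    """Keep the beam_size partial answers with the highest cumulative
    log-probability; a beam is finished once it emits the end sign eos."""
    beams = [([bos], 0.0)]            # (word sequence, cumulative log-prob)
    finished = []
    for _ in range(max_len):
        candidates = []
        for seq, logp in beams:
            for word, p in next_probs(seq).items():
                candidates.append((seq + [word], logp + math.log(p)))
        candidates.sort(key=lambda c: c[1], reverse=True)
        beams = []
        for seq, logp in candidates[:beam_size]:
            (finished if seq[-1] == eos else beams).append((seq, logp))
        if not beams:
            break
    finished.sort(key=lambda c: c[1], reverse=True)
    return finished[0][0] if finished else beams[0][0]

def toy_next_probs(seq):
    """A hand-written stand-in for the model's Softmax layer (hypothetical)."""
    table = {
        "<BOA>": {"sitting": 0.6, "standing": 0.4},
        "sitting": {"on": 0.9, "<EOA>": 0.1},
        "on": {"the": 1.0},
        "the": {"umbrella": 1.0},
        "standing": {"<EOA>": 1.0},
    }
    return table.get(seq[-1], {"<EOA>": 1.0})
```

With this toy distribution, the decoder follows the most probable continuation until it emits ⟨EOA⟩, mirroring the answer-generation loop of the mQA model.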
Specifically, the activation of the fusing layer f(t) for the t-th word in the answer can be calculated as follows:

    f(t) = g(V_rQ r_Q + V_I I + V_rA r_A(t) + V_w w(t)),    (1)

where "+" denotes element-wise addition, r_Q stands for the activation of the LSTM(Q) memory cells of the last word in the question, I denotes the image representation, and r_A(t) and w(t) denote the activation of the LSTM(A) memory cells and the word embedding of the t-th word in the answer respectively. V_rQ, V_I, V_rA, and V_w are the weight matrices that need to be learned. g(.) is an element-wise non-linear function. After the fusing layer, we build an intermediate layer that maps the dense multimodal representation in the fusing layer back to the dense word representation. We then build a fully connected Softmax layer to predict the probability distribution of the next word in the answer. This strategy allows the weight sharing between the word embedding layer and the fully connected Softmax layer as introduced in [25] (see details in Section 3.2). Similar to [25], we use the sigmoid function as the activation function of the three gates and adopt ReLU [30] as the non-linear function for the LSTM memory cells. The non-linear activation function for the word embedding layer, the fusing layer and the intermediate layer is the scaled hyperbolic tangent function [20]: g(x) = 1.7159 · tanh((2/3) x).

3.2 The Weight Sharing Strategy

As mentioned in Section 2, our model adopts different LSTMs for the question and the answer because of the different grammar properties of questions and answers. However, the meaning of single words in both questions and answers should be the same. Therefore, we share the weight matrix between the word embedding layers of the first component and the third component. In addition, this weight matrix for the word embedding layers is shared, in a transposed manner, with the weight matrix in the fully connected Softmax layer.
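A minimal numerical sketch of Eq. (1) and the transposed Softmax sharing, using plain Python lists as toy-sized matrices (the real model uses 512/400-dimensional layers and learned weights; all values below are placeholders):

```python
import math

def scaled_tanh(x):
    # g(x) = 1.7159 * tanh((2/3) x), the scaled hyperbolic tangent used by
    # the embedding, fusing and intermediate layers.
    return 1.7159 * math.tanh(2.0 * x / 3.0)

def matvec(M, v):
    return [sum(m_ij * v_j for m_ij, v_j in zip(row, v)) for row in M]

def fuse(V_rQ, V_I, V_rA, V_w, r_Q, I, r_A_t, w_t):
    """Eq. (1): f(t) = g(V_rQ r_Q + V_I I + V_rA r_A(t) + V_w w(t)),
    where "+" is element-wise addition of the four projected inputs."""
    summed = [a + b + c + d for a, b, c, d in zip(
        matvec(V_rQ, r_Q), matvec(V_I, I),
        matvec(V_rA, r_A_t), matvec(V_w, w_t))]
    return [scaled_tanh(s) for s in summed]

def softmax_tied(W_embed, h):
    """Next-word distribution from a dense representation h, reusing the
    embedding matrix W_embed (one row per vocabulary word) in a transposed
    manner, as in the shared-Softmax scheme of [25]."""
    logits = matvec(W_embed, h)
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]
    z = sum(exps)
    return [e / z for e in exps]
```

Because `softmax_tied` reuses the embedding rows as decoding weights, no separate Softmax matrix is needed, which is exactly what saves roughly half the parameters in Section 3.2.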
Intuitively, the function of the weight matrix in the word embedding layer is to encode the one-hot word representation into a dense word representation. The function of the weight matrix in the Softmax layer is to decode the dense word representation back into a pseudo one-hot word representation, which is the inverse operation of the word embedding. This strategy reduces the number of parameters in the model by nearly half and is shown to have better performance in image captioning and novel visual concept learning tasks [25].

3.3 Training Details

The CNN we used is pre-trained on the ImageNet classification task [33]. This component is fixed during the QA training. We adopt a log-likelihood loss defined on the word sequence of the answer. Minimizing this loss function is equivalent to maximizing the probability that the model generates the groundtruth answers in the training set. We jointly train the first, third and fourth components using the stochastic gradient descent method. The initial learning rate is 1 and we decrease it by a factor of 10 for every epoch of the data. We stop the training when the loss on the validation set does not decrease within three epochs. The hyperparameters of the model are selected by cross-validation. For the Chinese question answering task, we segment the sentences into word phrases. These phrases can be treated equivalently to English words.

4 The Freestyle Multilingual Image Question Answering (FM-IQA) Dataset

Our method is trained and evaluated on a large-scale multilingual visual question answering dataset. In Section 4.1, we describe the process of collecting the data and the method for monitoring the annotation quality. Some statistics and examples of the dataset are given in Section 4.2.
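The learning-rate schedule and early-stopping rule from the training details of Section 3.3 can be replayed in a few lines. The per-epoch validation losses below are hypothetical:

```python
def run_schedule(val_losses, lr0=1.0, decay=10.0, patience=3):
    """Replay the schedule: the learning rate starts at lr0 and is divided
    by `decay` after every epoch; training stops once the validation loss
    has not decreased for `patience` consecutive epochs.
    Returns (epochs_run, final_lr)."""
    best = float("inf")
    stale = 0
    lr = lr0
    epochs = 0
    for loss in val_losses:
        epochs += 1
        if loss < best:
            best, stale = loss, 0
        else:
            stale += 1
            if stale >= patience:
                break  # early stopping
        lr /= decay
    return epochs, lr
```

With losses that improve for three epochs and then stagnate, training stops at the sixth epoch, by which point the learning rate has been divided by 10 five times.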
The latest dataset is available on the project page: http://idl.baidu.com/FM-IQA.html

4.1 The Data Collection

We start with the 158,392 images from the newly released MS COCO [21] training, validation and testing sets as the initial image set. The annotations are collected using Baidu's online crowdsourcing server.4 To make the labeled question-answer pairs diversified, the annotators are free to give any type of question, as long as the question is related to the content of the image. The question should be answerable from the visual content and commonsense (e.g., we are not expecting to get questions such as "What is the name of the person in the image?"). The annotators need to give an answer to the question themselves. On the one hand, the freedom we give to the annotators is beneficial in order to get a freestyle, interesting and diversified set of questions. On the other hand, it makes it harder to control the quality of the annotation compared to a more detailed instruction. To monitor the annotation quality, we conduct an initial quality filtering stage. Specifically, we randomly sampled 1,000 images from the MS COCO dataset as a quality monitoring dataset, used as an initial set for the annotators (they do not know this is a test). We then sample some annotations and rate their quality after each annotator finishes some labeling on this quality monitoring dataset (about 20 question-answer pairs per annotator). We only select a small number of annotators (195 individuals) whose annotations are satisfactory (i.e. the questions are related to the content of the image and the answers are correct). We also give preference to the annotators who provide interesting questions that require high-level reasoning to answer. Only the selected annotators are permitted to label the rest of the images.
We pick a set of good and bad examples of the annotated question-answer pairs from the quality monitoring dataset, and show them to the selected annotators as references. We also provide reasons for selecting these examples. After the annotation of all the images is finished, we further refine the dataset and remove a small portion of the images with badly labeled questions and answers.

4 http://test.baidu.com

4.2 The Statistics of the Dataset

Currently there are 158,392 images with 316,193 Chinese question-answer pairs and their English translations. Each image has at least two question-answer pairs as annotations. The average lengths of the questions and answers are 7.38 and 3.82 respectively, measured in Chinese words. Some sample images are shown in Figure 3.

Figure 3: Sample images in the FM-IQA dataset. This dataset contains 316,193 Chinese question-answer pairs with corresponding English translations. Example groundtruth pairs (English translations shown): "What is the boy in green cap doing?" / "He is playing skateboard."; "Is there any person in the image?" / "Yes."; "Is the computer on the right hand or left hand side of the gentleman?" / "On the right hand side."; "What is the color of the frisbee?" / "Yellow."; "Why does the bus park there?" / "Preparing for repair."; "What is the texture of the sofa in the room?" / "Cloth."; "Is the man trying to buy vegetables?" / "Yes."; "How many layers are there for the cake?" / "Six."; "What are the people doing?" / "Walking with umbrellas."; "What does it indicate when the phone, mouse and laptop are placed together?" / "Their owner is tired and sleeping."

We randomly sampled 1,000 question-answer pairs and their corresponding images as the test set. The questions in this dataset are diversified, which requires a vast set of AI capabilities in order to answer them.
They contain some relatively simple image understanding questions about, e.g., the actions of objects (e.g., "What is the boy in green cap doing?"), the object class (e.g., "Is there any person in the image?"), the relative positions and interactions among objects (e.g., "Is the computer on the right or left side of the gentleman?"), and the attributes of the objects (e.g., "What is the color of the frisbee?"). In addition, the dataset contains some questions that need high-level reasoning with clues from vision, language and commonsense. For example, to answer the question "Why does the bus park there?", we should know that this question is about the parked bus in the image with two men holding tools at the back. Based on our commonsense, we can guess that there might be some problems with the bus and that the two men in the image are trying to repair it. These questions are hard to answer, but we believe they are actually the most interesting part of the questions in the dataset. We categorize the questions into 8 types and show their statistics on the project page. The answers are also diversified. The annotators are allowed to give a single phrase or a single word as the answer (e.g. "Yellow") or they can give a complete sentence (e.g. "The frisbee is yellow").

5 Experiments

The very recent works on visual question answering ([32, 23]) test their methods on datasets where the answer to a question is a single word or a short phrase. Under this setting, it is plausible to use automatic evaluation metrics that measure single-word similarity, such as the Wu-Palmer similarity measure (WUPS) [41]. However, for our newly proposed dataset, the answers are freestyle and can be complete sentences. For most cases, there are numerous choices of answers that are all correct. Possible alternatives are the BLEU score [31], METEOR [18], CIDEr [39] or other metrics that are widely used in the image captioning task [24].
The problem with these metrics is that only a few words in an answer are semantically critical. These metrics give the words in a sentence either equal weights (e.g., BLEU and METEOR) or weights derived from tf-idf term frequencies (e.g., CIDEr), and hence cannot fully capture the importance of the keywords. The evaluation of the image captioning task suffers from the same problem (though less severely than question answering, since it only needs a general description). To avoid these problems, we conduct a real Visual Turing Test using human judges for our model, which is described in detail in Section 5.1. In addition, we rate each generated sentence with a score (the larger the better) in Section 5.2, which gives a more fine-grained evaluation of our method. In Section 5.3, we provide performance comparisons of different variants of our mQA model on the validation set.

Table 1: The results of our mQA model on our FM-IQA dataset.

            Visual Turing Test            Human Rated Scores
            Pass   Fail   Pass Rate (%)   "2"   "1"   "0"   Avg. Score
  Human     948    52     94.8            927   64    9     1.918
  blind-QA  340    660    34.0            -     -     -     -
  mQA       647    353    64.7            628   198   174   1.454

5.1 The Visual Turing Test

In this Visual Turing Test, a human judge is presented with an image, a question, and the answer to the question generated by the model under test or by the human annotators. He or she needs to determine, based on the answer, whether the answer was given by a human (i.e., the test is passed) or by a machine (i.e., the test is failed). In practice, we use the images and questions from the test set of our FM-IQA dataset. We use our mQA model to generate the answer to each question. We also implement a baseline model for question answering without visual information. The structure of this baseline model is similar to mQA, except that we do not feed the image information extracted by the CNN into the fusing layer. We denote it as blind-QA.
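The difference between the two models can be sketched with a minimal additive fusing layer. This is an illustration only: the layer names, dimensions, and weights below are hypothetical, not the paper's actual architecture; the point is simply that blind-QA is the same network minus the visual input to the fusing layer.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 8  # illustrative hidden dimension

# Hypothetical fusing-layer weights (not the paper's actual parameters).
W_q, W_a, W_img = (rng.standard_normal((D, D)) for _ in range(3))

def fuse(question_vec, answer_vec, image_feat=None):
    """Fuse question/answer representations, optionally with a CNN image feature.

    With image_feat=None this mimics the blind-QA baseline, which omits the
    visual input to the fusing layer but is otherwise identical.
    """
    h = W_q @ question_vec + W_a @ answer_vec
    if image_feat is not None:      # mQA-style multimodal fusion
        h = h + W_img @ image_feat
    return np.tanh(h)               # fused representation

q, a, img = (rng.standard_normal(D) for _ in range(3))
full = fuse(q, a, img)   # uses visual information
blind = fuse(q, a)       # ignores the image, as in blind-QA
```

Everything downstream of the fusing layer can stay unchanged, which is why blind-QA isolates the contribution of the visual feature.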
The answers generated by our mQA model, the blind-QA model and the ground-truth answers are mixed together. This leads to 3,000 question-answer pairs with the corresponding images, which are randomly assigned to 12 human judges. The results are shown in Table 1. They show that 64.7% of the answers generated by our mQA model are treated as answers provided by a human. The blind-QA model performs very badly in this task, though some of its generated answers pass the test. Because some of the questions are effectively multiple-choice questions, it is possible to get a correct answer by random guessing based on purely linguistic clues. To study the variance of the VTT evaluation across different sets of human judges, we conduct two additional evaluations with different groups of judges under the same setting. The standard deviations of the passing rate are 0.013, 0.019 and 0.024 for human, the blind-QA model and the mQA model respectively. This indicates that the VTT is a stable and reliable evaluation metric for this task.

5.2 The Score of the Generated Answer

The Visual Turing Test gives only a rough evaluation of the generated answers. We also conduct a fine-grained evaluation with scores of "0", "1", or "2". "0" and "2" mean that the answer is totally wrong and perfectly correct, respectively. "1" means that the answer is only partially correct (e.g., the general category is right but the sub-category is wrong) but makes sense to the human judges. The human judges for this task are not necessarily the same people as for the Visual Turing Test. After collecting the results, we find that some human judges also rate an answer with "1" if the question is so hard that even a human, without looking carefully at the image, might make a mistake. We show randomly sampled images whose scores are "1" in Figure 4. The results are shown in Table 1. Among the answers that are not perfectly correct (i.e., whose scores are not 2), over half are partially correct.
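As a sanity check, the "Avg. Score" column of Table 1 follows directly from the per-score counts (each row's counts sum to the 1,000 test pairs):

```python
def average_score(counts):
    """Mean human-rated score from counts of ratings "2", "1", "0"."""
    n2, n1, n0 = counts
    total = n2 + n1 + n0
    return (2 * n2 + 1 * n1 + 0 * n0) / total

human = average_score((927, 64, 9))    # Table 1, human row
mqa = average_score((628, 198, 174))   # Table 1, mQA row
print(round(human, 3), round(mqa, 3))  # 1.918 1.454
```

Both values match the table exactly.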
Similar to the VTT evaluation process, we also conduct two additional groups of this scoring evaluation. The standard deviations of the scores for human and for our mQA model are 0.020 and 0.041 respectively. In addition, in 88.3% and 83.9% of the cases, the three groups give the same score for human and for our mQA model respectively.

5.3 Performance Comparisons of the Different mQA Variants

In order to show the effectiveness of the different components and strategies of our mQA model, we implement three variants of the mQA model in Figure 2.

Figure 4 (random examples of answers rated "1" by the human judges) shows pairs such as: "What is in the plate?" ("Food."), "What is the dog doing?" ("Surfing in the sea."), "Where is the cat?" ("On the bed."), "What is the type of the vehicle?" ("Train."), and "What is there in the image?" ("There is a clock.")

Figure 5 (sample questions generated by our model, with their answers) shows pairs such as: "Where is this?" ("This is the kitchen room."), "What kind of food is this?" ("Pizza."), "Where is the computer?" ("On the desk."), and "Is this guy playing tennis?" ("Yes.")

Table 2: Performance comparisons of the different mQA variants.

                      Word Error   Loss
  mQA-avg-question    0.442        2.17
  mQA-same-LSTMs      0.439        2.09
  mQA-noTWS           0.438        2.14
  mQA-complete        0.393        1.91

For the first variant (i.e., "mQA-avg-question"), we replace the first LSTM component of the model (i.e., the LSTM that extracts the question embedding) with the average embedding of the words in the question using word2vec [29]. It is used to show the effectiveness of the LSTM as a question embedding learner and extractor. For the second variant (i.e., "mQA-same-LSTMs"), we use two shared-weights LSTMs to model the question and the answer. It is used to show the effectiveness of decoupling the weights of the LSTM(Q) and the LSTM(A) in our model. For the third variant (i.e.
“mQA-noTWS”), we do not adopt the Transposed Weight Sharing (TWS) strategy. It is used to show the effectiveness of TWS. The word error rates and losses of the three variants and the complete mQA model (i.e., mQA-complete) are shown in Table 2. All three variants perform worse than our complete mQA model.

6 Discussion

In this paper, we present the mQA model, which is able to give a sentence or a phrase as the answer to a freestyle question about an image. To validate the effectiveness of the method, we construct a Freestyle Multilingual Image Question Answering (FM-IQA) dataset containing over 310,000 question-answer pairs. We evaluate our method with human judges through a real Turing Test, which shows that 64.7% of the answers given by our mQA model are treated as answers provided by a human. The FM-IQA dataset can also be used for other tasks, such as visual machine translation, where the visual information can serve as context that helps to remove the ambiguity of the words in a sentence. We also modified the LSTM in the first component into the multimodal LSTM shown in [25]. This modification allows us to generate a freestyle question about the content of an image, and to provide an answer to this question. We show some sample results in Figure 5. We show some failure cases of our model in Figure 6. The model sometimes makes mistakes when the commonsense reasoning through background scenes is incorrect (e.g., for the image in the first column, our method says that the man is surfing, but the small yellow frisbee in the image indicates that he is actually trying to catch the frisbee). It also makes mistakes when the target object that the question focuses on is too small or looks very similar to other objects (e.g., the images in the second and fourth columns). Another interesting example is the image and question in the fifth column of Figure 6. Answering this question is very hard, since it requires high-level reasoning based on experience from everyday life.
Our model outputs an ⟨OOV⟩ sign, which is a special word we use when the model meets a word it has not seen before (i.e., one that does not appear in its word dictionary). In future work, we will try to address these issues by incorporating more visual and linguistic information (e.g., using object detection or attention models).

Figure 6 (failure cases of our mQA model on the FM-IQA dataset) shows pairs such as: "What is the handsome boy doing?", ground truth "Trying to catch the frisbee.", mQA answer "Surfing."; "What is there in the image?", ground truth "They are buffalos.", mQA answer "Horses on the grassland."; "Which fruit is there in the plate?", ground truth "Apples and oranges.", mQA answer "Bananas and oranges."; "What is the type of the vehicle?", ground truth "Train.", mQA answer "Bus."; and "Why does the bus park there?", ground truth "Preparing for repair.", mQA answer "⟨OOV⟩ (I do not know.)"

References

[1] S. Antol, A. Agrawal, J. Lu, M. Mitchell, D. Batra, C. L. Zitnick, and D. Parikh. VQA: Visual question answering. arXiv preprint arXiv:1505.00468, 2015.
[2] J. P. Bigham, C. Jayant, H. Ji, G. Little, A. Miller, R. C. Miller, R. Miller, A. Tatarowicz, B. White, S. White, et al. VizWiz: Nearly real-time answers to visual questions. In ACM Symposium on User Interface Software and Technology, pages 333–342, 2010.
[3] L.-C. Chen, G. Papandreou, I. Kokkinos, K. Murphy, and A. L. Yuille. Semantic image segmentation with deep convolutional nets and fully connected CRFs. ICLR, 2015.
[4] X. Chen and C. L. Zitnick. Learning a recurrent visual representation for image caption generation. In CVPR, 2015.
[5] K. Cho, B. van Merrienboer, C. Gulcehre, F. Bougares, H. Schwenk, and Y. Bengio. Learning phrase representations using RNN encoder-decoder for statistical machine translation. arXiv preprint arXiv:1406.1078, 2014.
[6] J. Donahue, L. A. Hendricks, S. Guadarrama, M. Rohrbach, S. Venugopalan, K. Saenko, and T. Darrell. Long-term recurrent convolutional networks for visual recognition and description.
In CVPR, 2015.
[7] J. L. Elman. Finding structure in time. Cognitive Science, 14(2):179–211, 1990.
[8] H. Fang, S. Gupta, F. Iandola, R. Srivastava, L. Deng, P. Dollár, J. Gao, X. He, M. Mitchell, J. Platt, et al. From captions to visual concepts and back. In CVPR, 2015.
[9] D. Geman, S. Geman, N. Hallonquist, and L. Younes. Visual Turing test for computer vision systems. PNAS, 112(12):3618–3623, 2015.
[10] R. Girshick, J. Donahue, T. Darrell, and J. Malik. Rich feature hierarchies for accurate object detection and semantic segmentation. In CVPR, 2014.
[11] M. Grubinger, P. Clough, H. Müller, and T. Deselaers. The IAPR TC-12 benchmark: A new evaluation resource for visual information systems. In International Workshop OntoImage, pages 13–23, 2006.
[12] S. Hochreiter and J. Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735–1780, 1997.
[13] N. Kalchbrenner and P. Blunsom. Recurrent continuous translation models. In EMNLP, pages 1700–1709, 2013.
[14] A. Karpathy and L. Fei-Fei. Deep visual-semantic alignments for generating image descriptions. In CVPR, 2015.
[15] R. Kiros, R. Salakhutdinov, and R. S. Zemel. Unifying visual-semantic embeddings with multimodal neural language models. TACL, 2015.
[16] B. Klein, G. Lev, G. Sadeh, and L. Wolf. Fisher vectors derived from hybrid Gaussian-Laplacian mixture models for image annotation. arXiv preprint arXiv:1411.7399, 2014.
[17] A. Krizhevsky, I. Sutskever, and G. E. Hinton. ImageNet classification with deep convolutional neural networks. In NIPS, 2012.
[18] A. Lavie and A. Agarwal. METEOR: An automatic metric for MT evaluation with high levels of correlation with human judgments. In Workshop on Statistical Machine Translation, pages 228–231. Association for Computational Linguistics, 2007.
[19] R. Lebret, P. O. Pinheiro, and R. Collobert. Simple image description generator via a linear phrase-based approach. arXiv preprint arXiv:1412.8419, 2014.
[20] Y. A. LeCun, L. Bottou, G. B. Orr, and K.-R. Müller.
Efficient backprop. In Neural Networks: Tricks of the Trade, pages 9–48. 2012.
[21] T.-Y. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ramanan, P. Dollár, and C. L. Zitnick. Microsoft COCO: Common objects in context. arXiv preprint arXiv:1405.0312, 2014.
[22] M. Malinowski and M. Fritz. A multi-world approach to question answering about real-world scenes based on uncertain input. In Advances in Neural Information Processing Systems, pages 1682–1690, 2014.
[23] M. Malinowski, M. Rohrbach, and M. Fritz. Ask your neurons: A neural-based approach to answering questions about images. arXiv preprint arXiv:1505.01121, 2015.
[24] J. Mao, W. Xu, Y. Yang, J. Wang, Z. Huang, and A. Yuille. Deep captioning with multimodal recurrent neural networks (m-RNN). In ICLR, 2015.
[25] J. Mao, W. Xu, Y. Yang, J. Wang, Z. Huang, and A. Yuille. Learning like a child: Fast novel visual concept learning from sentence descriptions of images. arXiv preprint arXiv:1504.06692, 2015.
[26] J. Mao, W. Xu, Y. Yang, J. Wang, and A. L. Yuille. Explain images with multimodal recurrent neural networks. NIPS Deep Learning Workshop, 2014.
[27] T. Mikolov, A. Joulin, S. Chopra, M. Mathieu, and M. Ranzato. Learning longer memory in recurrent neural networks. arXiv preprint arXiv:1412.7753, 2014.
[28] T. Mikolov, M. Karafiát, L. Burget, J. Černocký, and S. Khudanpur. Recurrent neural network based language model. In INTERSPEECH, pages 1045–1048, 2010.
[29] T. Mikolov, I. Sutskever, K. Chen, G. S. Corrado, and J. Dean. Distributed representations of words and phrases and their compositionality. In NIPS, pages 3111–3119, 2013.
[30] V. Nair and G. E. Hinton. Rectified linear units improve restricted Boltzmann machines. In ICML, pages 807–814, 2010.
[31] K. Papineni, S. Roukos, T. Ward, and W.-J. Zhu. BLEU: A method for automatic evaluation of machine translation. In ACL, pages 311–318, 2002.
[32] M. Ren, R. Kiros, and R. Zemel.
Image question answering: A visual semantic embedding model and a new dataset. arXiv preprint arXiv:1505.02074, 2015. [33] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, A. C. Berg, and L. Fei-Fei. ImageNet Large Scale Visual Recognition Challenge, 2014. [34] K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. In ICLR, 2015. [35] I. Sutskever, O. Vinyals, and Q. V. Le. Sequence to sequence learning with neural networks. In NIPS, pages 3104–3112, 2014. [36] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich. Going deeper with convolutions. arXiv preprint arXiv:1409.4842, 2014. [37] K. Tu, M. Meng, M. W. Lee, T. E. Choe, and S.-C. Zhu. Joint video and text parsing for understanding events and answering queries. MultiMedia, IEEE, 21(2):42–70, 2014. [38] A. M. Turing. Computing machinery and intelligence. Mind, pages 433–460, 1950. [39] R. Vedantam, C. L. Zitnick, and D. Parikh. Cider: Consensus-based image description evaluation. In CVPR, 2015. [40] O. Vinyals, A. Toshev, S. Bengio, and D. Erhan. Show and tell: A neural image caption generator. In CVPR, 2015. [41] Z. Wu and M. Palmer. Verbs semantics and lexical selection. In ACL, pages 133–138, 1994. [42] K. Xu, J. Ba, R. Kiros, K. Cho, A. Courville, R. Salakhutdinov, R. Zemel, and Y. Bengio. Show, attend and tell: Neural image caption generation with visual attention. arXiv preprint arXiv:1502.03044, 2015. [43] P. Young, A. Lai, M. Hodosh, and J. Hockenmaier. From image descriptions to visual denotations: New similarity metrics for semantic inference over event descriptions. In ACL, pages 479–488, 2014. [44] J. Zhu, J. Mao, and A. L. Yuille. Learning from weakly supervised data by the expectation loss svm (e-svm) algorithm. In NIPS, pages 1125–1133, 2014. 9 | 2015 | 397 |
5,921 | The Pseudo-Dimension of Near-Optimal Auctions Jamie Morgenstern⇤ Computer and Information Science University of Pennsylvania Philadelphia, PA jamiemor@cis.upenn.edu Tim Roughgarden Stanford University Palo Alto, CA tim@cs.stanford.edu Abstract This paper develops a general approach, rooted in statistical learning theory, to learning an approximately revenue-maximizing auction from data. We introduce t-level auctions to interpolate between simple auctions, such as welfare maximization with reserve prices, and optimal auctions, thereby balancing the competing demands of expressivity and simplicity. We prove that such auctions have small representation error, in the sense that for every product distribution F over bidders’ valuations, there exists a t-level auction with small t and expected revenue close to optimal. We show that the set of t-level auctions has modest pseudodimension (for polynomial t) and therefore leads to small learning error. One consequence of our results is that, in arbitrary single-parameter settings, one can learn a mechanism with expected revenue arbitrarily close to optimal from a polynomial number of samples. 1 Introduction In the traditional economic approach to identifying a revenue-maximizing auction, one first posits a prior distribution over all unknown information, and then solves for the auction that maximizes expected revenue with respect to this distribution. The first obstacle to making this approach operational is the difficulty of formulating an appropriate prior. The second obstacle is that, even if an appropriate prior distribution is available, the corresponding optimal auction can be far too complex and unintuitive for practical use. This motivates the goal of identifying auctions that are “simple” and yet nearly-optimal in terms of expected revenue. In this paper, we apply tools from learning theory to address both of these challenges. 
In our model, we assume that bidders' valuations (i.e., "willingness to pay") are drawn from an unknown distribution F. A learning algorithm is given i.i.d. samples from F. For example, these could represent the outcomes of comparable transactions that were observed in the past. The learning algorithm suggests an auction to use for future bidders, and its performance is measured by comparing the expected revenue of its output auction to that earned by the optimal auction for the distribution F. The possible outputs of the learning algorithm correspond to some set C of auctions. We view C as a design parameter that can be selected by a seller, along with the learning algorithm. A central goal of this work is to identify classes C that balance representation error (the amount of revenue sacrificed by restricting to auctions in C) with learning error (the generalization error incurred by learning over C from samples). That is, we seek a set C that is rich enough to contain an auction that closely approximates an optimal auction (whatever F might be), yet simple enough that the best auction in C can be learned from a small amount of data. Learning theory offers tools both for rigorously defining the "simplicity" of a set C of auctions, through well-known complexity measures such as the pseudo-dimension, and for quantifying the amount of data necessary to identify the approximately best auction from C. Our goal of learning a near-optimal auction also requires understanding the representation error of different classes C; this task is problem-specific, and we develop the necessary arguments in this paper.

* Part of this work was done while visiting Stanford University. Partially supported by a Simons Award for Graduate Students in Theoretical Computer Science, as well as NSF grant CCF-1415460.

1.1 Our Contributions

The primary contributions of this paper are the following.
First, we show that well-known concepts from statistical learning theory can be directly applied to reason about learning an approximately revenue-maximizing auction from data. Precisely, for a set C of auctions and an arbitrary unknown distribution F over valuations in [1, H], O((H²/ε²) · d_C · log(H/ε)) samples from F are enough to learn (up to a 1 − ε factor) the best auction in C, where d_C denotes the pseudo-dimension of the set C (defined in Section 2). Second, we introduce the class of t-level auctions, to interpolate smoothly between simple auctions, such as welfare maximization subject to individualized reserve prices (when t = 1), and the complex auctions that can arise as optimal auctions (as t → ∞). Third, we prove that in quite general auction settings with n bidders, the pseudo-dimension of the set of t-level auctions is O(nt log nt). Fourth, we quantify the number t of levels required for the set of t-level auctions to have low representation error, with respect to the optimal auctions that arise from arbitrary product distributions F. For example, for single-item auctions and several generalizations thereof, if t = Ω(H/ε), then for every product distribution F there exists a t-level auction with expected revenue at least 1 − ε times that of the optimal auction for F. In the above sense, the "t" in t-level auctions is a tunable "sweet spot", allowing a designer to balance the competing demands of expressivity (to achieve near-optimality) and simplicity (to achieve learnability). For example, given a fixed amount of past data, our results indicate how much auction complexity (in the form of the number of levels t) one can employ without risking overfitting the auction to the data. Alternatively, given a target approximation factor 1 − ε, our results give sufficient conditions on t and consequently on the number of samples needed to achieve this approximation factor.
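A small illustrative calculator for how these asymptotic bounds scale. The constants are hypothetical (the O(·) notation hides them); the sketch only exposes the shape of the dependence on H, ε, and the pseudo-dimension d_C = O(nt log nt):

```python
import math

def pseudo_dim_t_level(n, t):
    """Shape of the O(nt log nt) pseudo-dimension bound for t-level auctions."""
    return n * t * math.log(n * t)

def sample_bound(H, eps, d):
    """Shape of the O((H^2 / eps^2) * d * log(H / eps)) sample bound
    (constant factor omitted)."""
    return (H / eps) ** 2 * d * math.log(H / eps)

d = pseudo_dim_t_level(n=5, t=10)
m_fine = sample_bound(H=10, eps=0.1, d=d)
m_coarse = sample_bound(H=10, eps=0.2, d=d)
# Doubling eps shrinks the bound by roughly a factor of 4,
# since the (H/eps)^2 term dominates.
print(m_coarse < m_fine / 3)
```

This makes the expressivity/learnability trade-off concrete: raising t raises d and hence the number of samples one should have before using a richer class.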
The resulting sample complexity upper bound has polynomial dependence on H, ε⁻¹, and the number n of bidders. Known results [1, 8] imply that any method of learning a (1 − ε)-approximate auction from samples must have sample complexity with polynomial dependence on all three of these parameters, even for single-item auctions.

1.2 Related Work

The present work shares much of its spirit and high-level goals with Balcan et al. [4], who proposed applying statistical learning theory to the design of near-optimal auctions. The first-order difference between the two works is that our work assumes bidders' valuations are drawn from an unknown distribution, while Balcan et al. [4] study the more demanding "prior-free" setting. Since no auction can achieve near-optimal revenue ex post, Balcan et al. [4] define their revenue benchmark on each input v with respect to a set G of auctions, as the maximum revenue obtained by any auction of G on v. The idea of learning from samples enters the work of Balcan et al. [4] through the internal randomness of their partitioning of bidders, rather than through an exogenous distribution over inputs (as in this work). Both our work and theirs require polynomial dependence on H and 1/ε (ours in terms of the necessary number of samples, theirs in terms of the necessary number of bidders), as well as on a measure of the complexity of the class G (in our case the pseudo-dimension, in theirs an analogous measure). The primary improvement of our work over the results in Balcan et al.
[4] is that our results apply to single-item auctions, matroid feasibility, and arbitrary single-parameter settings (see Section 2 for definitions), while their results apply only to single-parameter settings of unlimited supply.¹ We also view as a feature the fact that our sample complexity upper bounds can be deduced directly from well-known results in learning theory; we can focus instead on the non-trivial and problem-specific work of bounding the pseudo-dimension and representation error of well-chosen auction classes. Elkind [12] also considers a model similar to ours, but only for the special case of single-item auctions. While her proposed auction format is similar to ours, our results cover the far more general case of arbitrary single-parameter settings and of distributions with non-finite support; our sample complexity bounds are also better even in the case of a single-item auction (linear rather than quadratic dependence on the number of bidders). On the other hand, the learning algorithm in [12] (for single-item auctions) is computationally efficient, while ours is not. Cole and Roughgarden [8] study single-item auctions with n bidders whose valuations are drawn from independent (not necessarily identical) "regular" distributions (see Section 2), and prove upper and lower bounds (polynomial in n and ε⁻¹) on the sample complexity of learning a (1 − ε)-approximate auction. While the formalism in their work is inspired by learning theory, no formal connections are offered; in particular, both their upper and lower bounds were proved from scratch. Our positive results include single-item auctions as a very special case and, for bounded or MHR valuations, our sample complexity upper bounds are much better than those in Cole and Roughgarden [8].

¹ See Balcan et al. [3] for an extension to the case of a large finite supply.

Huang et al.
[15] consider learning the optimal price from samples when there is a single buyer and a single seller; this problem was also studied implicitly in [10]. Our general positive results obviously cover the bounded-valuation and MHR settings in [15], though the specialized analysis in [15] yields better (indeed, almost optimal) sample complexity bounds as a function of ε⁻¹ and/or H. Medina and Mohri [17] show how to use a combination of the pseudo-dimension and Rademacher complexity to measure the sample complexity of selecting a single reserve price for the VCG mechanism to optimize revenue. In our notation, this corresponds to analyzing a single set C of auctions (VCG with a reserve). Medina and Mohri [17] do not address the expressivity-versus-simplicity trade-off that is central to this paper. Dughmi et al. [11] also study the sample complexity of learning good auctions, but their main results are negative (exponential sample complexity), for the difficult scenario of multi-parameter settings. (All settings in this paper are single-parameter.) Our work on t-level auctions also contributes to the literature on simple approximately revenue-maximizing auctions (e.g., [6, 14, 7, 9, 21, 24, 2]). Here, one takes the perspective of a seller who knows the valuation distribution F but is bound by a "simplicity constraint" on the auction deployed, thereby ruling out the optimal auction. Our results bounding the representation error of t-level auctions (Theorems 3.4, 4.1, 5.4, and 6.2) can be interpreted as a principled way to trade off the simplicity of an auction against its approximation guarantee. While previous work in this literature generally left the term "simple" safely undefined, this paper effectively proposes the pseudo-dimension of an auction class as a rigorous and quantifiable simplicity measure.

2 Preliminaries

This section reviews terminology and notation standard in Bayesian auction design and learning theory.
Bayesian Auction Design. We consider single-parameter settings with n bidders. This means that each bidder has a single unknown parameter, its valuation or willingness to pay for "winning." (Every bidder has value 0 for losing.) A setting is specified by a collection X of subsets of {1, 2, ..., n}; each such subset represents a collection of bidders that can simultaneously "win." For example, in a setting with k copies of an item, where no bidder wants more than one copy, X would be all subsets of {1, 2, ..., n} of cardinality at most k. A generalization of this case, studied in the supplementary materials (Section 5), is matroid settings. These satisfy: (i) whenever X ∈ X and Y ⊆ X, also Y ∈ X; and (ii) for any two sets I_1, I_2 ∈ X with |I_1| < |I_2|, there is always an augmenting element i_2 ∈ I_2 \ I_1 such that I_1 ∪ {i_2} ∈ X. The supplementary materials (Section 6) also consider arbitrary single-parameter settings, where the only assumption is that ∅ ∈ X. To ease comprehension, we often illustrate our main ideas using single-item auctions (where X consists of the singletons and the empty set). We assume bidders' valuations are drawn from a continuous joint cumulative distribution F. Except in the extension in Section 4, we assume that the support of F is contained in [1, H]ⁿ. As in most of optimal auction theory [18], we usually assume that F is a product distribution, F = F_1 × F_2 × ··· × F_n, with each v_i ∼ F_i drawn independently but not necessarily identically. The virtual value of bidder i is φ_i(v_i) = v_i − (1 − F_i(v_i)) / f_i(v_i). A distribution satisfies the monotone hazard rate (MHR) condition if f_i(v_i)/(1 − F_i(v_i)) is nondecreasing; intuitively, if its tails are no heavier than those of an exponential distribution. In a fundamental paper, Myerson [18] proved that when every virtual valuation function is nondecreasing (the "regular" case), the auction that maximizes expected revenue for n Bayesian bidders chooses winners so as to maximize the sum of the virtual values of the winners.
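As a concrete illustration (my own example, not from the paper): for an exponential value distribution with rate λ, F(v) = 1 − e^(−λv) and f(v) = λe^(−λv), so the hazard rate f/(1 − F) is the constant λ (MHR holds with equality) and the virtual value is φ(v) = v − 1/λ. A quick numerical check:

```python
import math

lam = 2.0  # rate of an exponential value distribution (illustrative)

F = lambda v: 1.0 - math.exp(-lam * v)   # CDF
f = lambda v: lam * math.exp(-lam * v)   # density

def virtual_value(v):
    """phi(v) = v - (1 - F(v)) / f(v), as defined above."""
    return v - (1.0 - F(v)) / f(v)

def hazard_rate(v):
    return f(v) / (1.0 - F(v))

# For the exponential, (1 - F)/f = 1/lam, so phi(v) = v - 1/lam,
# and the hazard rate is the constant lam.
print(abs(virtual_value(1.3) - (1.3 - 1 / lam)) < 1e-12)
print(abs(hazard_rate(0.5) - lam) < 1e-12)
```

Since φ is v shifted by a constant here, it is nondecreasing, so the exponential distribution is "regular" in Myerson's sense.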
This auction is known as Myerson's auction, which we refer to as M. The result can be extended to the general, "non-regular" case by replacing the virtual valuation functions with "ironed virtual valuation functions." The details are well understood but technical; see Myerson [18] and Hartline [13].

Sample Complexity, VC Dimension, and the Pseudo-Dimension. This section reviews several well-known definitions from learning theory. Suppose there is some domain Q, and let c : Q → {0, 1} be some unknown target function. Let D be an unknown distribution over Q. We wish to understand how many labeled samples (x, c(x)) with x ∼ D are necessary and sufficient to output a ĉ which agrees with c almost everywhere with respect to D. The distribution-independent sample complexity of learning c depends fundamentally on the "complexity" of the set C of binary functions from which we are choosing ĉ. We define the relevant complexity measure next. Let S be a set of m samples from Q. The set S is said to be shattered by C if, for every subset T ⊆ S, there is some c_T ∈ C such that c_T(x) = 1 if x ∈ T and c_T(y) = 0 if y ∉ T. That is, ranging over all c ∈ C induces all 2^|S| possible projections onto S. The VC dimension of C, denoted VC(C), is the size of the largest set S that can be shattered by C. Let err_S(ĉ) = (Σ_{x∈S} |c(x) − ĉ(x)|) / |S| denote the empirical error of ĉ on S, and let err(ĉ) = E_{x∼D}[|c(x) − ĉ(x)|] denote the true expected error of ĉ with respect to D. A key result from learning theory [23] is: for every distribution D, a sample S of size Ω(ε⁻²(VC(C) + ln(1/δ))) is sufficient to guarantee that err_S(ĉ) ∈ [err(ĉ) − ε, err(ĉ) + ε] for every ĉ ∈ C with probability 1 − δ. In this case, the error on the sample is close to the true error, simultaneously for every hypothesis in C. In particular, choosing the hypothesis with the minimum sample error minimizes the true error, up to 2ε.
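The shattering definition can be checked by brute force on a toy class (my own illustration, not from the paper): one-sided threshold classifiers h_a(x) = 1 if x ≥ a. Any single point is shattered, but no pair of points is, since no threshold labels the smaller point 1 and the larger point 0; hence the VC dimension of this class is 1.

```python
def shatters(hyps, S):
    """True if the hypothesis set induces all 2^|S| labelings of S."""
    labelings = {tuple(h(x) for x in S) for h in hyps}
    return len(labelings) == 2 ** len(S)

# Threshold classifiers h_a(x) = 1[x >= a] over a small grid of thresholds.
thresholds = [-1.0, 0.5, 1.5, 2.5, 10.0]
hyps = [lambda x, a=a: int(x >= a) for a in thresholds]

print(shatters(hyps, [1.0]))        # single points are shattered
print(shatters(hyps, [1.0, 2.0]))   # no threshold labels 1.0 -> 1 and 2.0 -> 0
```

The same brute-force idea extends to the pseudo-dimension by additionally searching over target vectors r witnessing the shattering.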
We say C is (ε, δ)-uniformly learnable with sample complexity m if, given a sample S of size m, with probability 1 − δ, for all c ∈ C, |err_S(c) − err(c)| < ε. Thus, any class C is (ε, δ)-uniformly learnable with m = Θ((1/ε²)(VC(C) + ln(1/δ))) samples. Conversely, for every learning algorithm A that uses fewer than VC(C)/ε samples, there exists a distribution D′ and a constant q such that, with probability at least q, A outputs a hypothesis ĉ′ ∈ C with err(ĉ′) > err(ĉ) + ε/2 for some ĉ ∈ C. That is, the true error of the output hypothesis is more than ε/2 larger than that of the best hypothesis in the class. To learn real-valued functions, we need a generalization of the VC dimension (which concerns binary functions). The pseudo-dimension [19] does exactly this.² Formally, let c : Q → [0, H] be a real-valued function over Q, and C the class we are learning over. Let S be a sample of size m drawn from D and labeled according to c. Both the empirical and true error of a hypothesis ĉ are defined as before, though |ĉ(x) − c(x)| can now take on values in [0, H] rather than in {0, 1}. Let (r_1, ..., r_m) ∈ [0, H]^m be a set of targets for S. We say (r_1, ..., r_m) witnesses the shattering of S by C if, for each T ⊆ S, there exists some c_T ∈ C such that c_T(x_i) ≥ r_i for all x_i ∈ T and c_T(x_i) < r_i for all x_i ∉ T. If there exists some r witnessing the shattering of S, we say S is shatterable by C. The pseudo-dimension of C, denoted d_C, is the size of the largest set S which is shatterable by C. The sample complexity upper bounds of this paper are derived from the following theorem, which states that the distribution-independent sample complexity of learning over a class C of real-valued functions is governed by the class's pseudo-dimension.

Theorem 2.1 (e.g., [1]) Suppose C is a class of real-valued functions with range in [0, H] and pseudo-dimension d_C.
For every ε > 0, δ ∈ [0, 1], the sample complexity of (ε, δ)-uniformly learning f with respect to C is m = O((H/ε)² (d_C ln(H/ε) + ln(1/δ))). Moreover, the guarantee in Theorem 2.1 is realized by the learning algorithm that simply outputs the function c ∈ C with the smallest empirical error on the sample. ²The fat-shattering dimension is a weaker condition that is also sufficient for sample complexity bounds. All of our arguments give the same upper bounds on the pseudo-dimension and the fat-shattering dimension of various auction classes, so we present the stronger statements. Applying Pseudo-Dimension to Auction Classes For the remainder of this paper, we consider classes of truthful auctions C.³ When we discuss some auction c ∈ C, we treat c : [0, H]^n → R as the function that maps (truthful) bid tuples to the revenue achieved on them by the auction c. Then, rather than minimizing error, we aim to maximize revenue. In our setting, the guarantee of Theorem 2.1 directly implies that, with probability at least 1 − δ (over the m samples), the empirical revenue maximization learning algorithm (which returns the auction c ∈ C with the highest average revenue on the samples) chooses an auction with expected revenue (over the true underlying distribution F) that is within an additive ε of the maximum possible. 3 Single-Item Auctions To illustrate our ideas, we first focus on single-item auctions. The results of this section are generalized significantly in the supplementary material (see Sections 5 and 6). Section 3.1 defines the class of t-level single-item auctions, gives an example, and interprets the auctions as approximations to virtual welfare maximizers. Section 3.2 proves that the pseudo-dimension of the set of such auctions is O(nt log nt), which by Theorem 2.1 implies a sample-complexity upper bound. Section 3.3 proves that taking t = Ω(H/ε) yields low representation error.
3.1 t-Level Auctions: The Single-Item Case We now introduce t-level auctions, or Ct for short. Intuitively, one can think of each bidder as facing one of t possible prices; the price they face depends upon the values of the other bidders. Consider, for each bidder i, t numbers 0 `i,0 `i,1 . . . `i,t−1. We refer to these t numbers as thresholds. This set of tn numbers defines a t-level auction with the following allocation rule. Consider a valuation tuple v: 1. For each bidder i, let ti(vi) denote the index ⌧of the largest threshold `i,⌧that lower bounds vi (or -1 if vi < `i,0). We call ti(vi) the level of bidder i. 2. Sort the bidders from highest level to lowest level and, within a level, use a fixed lexicographical tie-breaking ordering ≻to pick the winner.4 3. Award the item to the first bidder in this sorted order (unless ti = −1 for every bidder i, in which case there is no sale). The payment rule is the unique one that renders truthful bidding a dominant strategy and charges 0 to losing bidders — the winning bidder pays the lowest bid at which she would continue to win. It is important for us to understand this payment rule in detail; there are three interesting cases. Suppose bidder i is the winner. In the first case, i is the only bidder who might be allocated the item (other bidders have level -1), in which case her bid must be at least her lowest threshold. In the second case, there are multiple bidders at her level, so she must bid high enough to be at her level (and, since ties are broken lexicographically, this is her threshold to win). In the final case, she need not compete at her level: she can choose to either pay one level above her competition (in which case her position in the tie-breaking ordering does not matter) or she can bid at the same level as her highest-level competitors (in which case she only wins if she dominates all of those bidders at the next-highest level according to ≻). 
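The allocation rule and the three payment cases just described can be sketched in a few lines of Python. This is my own illustrative implementation, not code from the paper: function names are mine, `thresholds[i]` holds the sorted ladder ℓ_{i,0}, …, ℓ_{i,t−1}, and the tie-breaking order ≻ is taken to be the bidder index (lower index wins).

```python
def level(thr_i, v_i):
    """Index of the largest threshold in thr_i that lower-bounds v_i, or -1."""
    return max([tau for tau, t in enumerate(thr_i) if v_i >= t], default=-1)

def run_t_level_auction(thresholds, v):
    """Return (winner index, payment), or (None, 0.0) if there is no sale.

    Assumes at least two bidders; payment follows the Monop/Mult/Unique cases.
    """
    n = len(v)
    levels = [level(thresholds[i], v[i]) for i in range(n)]
    top = max(levels)
    if top == -1:
        return None, 0.0
    # Highest level wins; ties broken lexicographically (lower index wins).
    i = min(j for j in range(n) if levels[j] == top)
    # tau_bar: highest level with at least two bidders at or above it.
    tau_bar = sorted(levels)[-2]
    if tau_bar == -1:                      # Monop: i is the only potential winner
        return i, thresholds[i][0]
    if levels[i] == tau_bar:               # Mult: tie at the winning level
        return i, thresholds[i][tau_bar]
    rivals = [j for j in range(n) if j != i and levels[j] >= tau_bar]
    if all(i < j for j in rivals):         # Unique: i precedes all rivals in the order
        return i, thresholds[i][tau_bar]
    return i, thresholds[i][tau_bar + 1]   # Unique: i must clear one level higher
```

By construction the payment is the lowest bid at which the winner would continue to win, which is what makes truthful bidding a dominant strategy here.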
Formally, the payment p of the winner i (if any) is as follows. Let τ̄ denote the highest level τ such that there are at least two bidders at or above level τ, and let I be the set of bidders other than i whose level is at least τ̄. Monop: If τ̄ = −1, then p_i = ℓ_{i,0} (she is the only potential winner, but must have level ≥ 0 to win). Mult: If t_i(v_i) = τ̄, then p_i = ℓ_{i,τ̄} (she needs to be at level τ̄). Unique: If t_i(v_i) > τ̄, then if i ≻ i′ for all i′ ∈ I she pays p_i = ℓ_{i,τ̄}, and otherwise she pays p_i = ℓ_{i,τ̄+1} (she either needs to be at level τ̄ + 1, in which case her position in ≻ does not matter, or at level τ̄, in which case she would need to be the highest according to ≻). ³An auction is truthful if truthful bidding is a dominant strategy for every bidder. That is: for every bidder i, and all possible bids by the other bidders, i maximizes its expected utility (value minus price paid) by bidding its true value. In the single-parameter settings that we study, the expected revenue of the optimal non-truthful auction (measured at a Bayes-Nash equilibrium with respect to the prior distribution) is no larger than that of the optimal truthful auction. ⁴When the valuation distributions are regular, this tie-breaking can be done by value, or randomly; when it is done by value, this equates to a generalization of VCG with non-anonymous reserves (and is IC and has identical representation error under this analysis when bidders are regular). We now describe a particular t-level auction, and demonstrate each case of the payment rule. Example 3.1 Consider the following 4-level auction for bidders a, b, c. Let ℓ_{a,·} = [2, 4, 6, 8], ℓ_{b,·} = [1.5, 5, 9, 10], and ℓ_{c,·} = [1.7, 3.9, 6, 7]. For example, if bidder a bids less than 2 she is at level −1, a bid in [2, 4) puts her at level 0, a bid in [4, 6) at level 1, a bid in [6, 8) at level 2, and a bid of at least 8 at level 3. Let a ≻ b ≻ c. Monop: If v_a = 3, v_b < 1.5, v_c < 1.7, then b and c are at level −1 (to which the item is never allocated).
So, a wins and pays 2, the minimum she needs to bid to be at level 0. Mult: If v_a ≥ 8, v_b ≥ 10, v_c < 7, then a and b are both at level 3, and a ≻ b, so a wins and pays 8 (the minimum she needs to bid to be at level 3). Unique: If v_a ≥ 8, v_b ∈ [5, 9], v_c ∈ [3.9, 6], then a is at level 3, and b and c are at level 1. Since a ≻ b and a ≻ c, a need only pay 4 (enough to be at level 1). If, on the other hand, v_a ∈ [4, 6], v_b ∈ [5, 9] and v_c ≥ 6, then c has level at least 2 (while a and b have level 1), but c must pay 6 since a, b ≻ c. Remark 3.2 (Connection to virtual valuation functions) t-level auctions are naturally interpreted as discrete approximations to virtual welfare maximizers, and our representation error bound in Theorem 3.4 makes this precise. Each level corresponds to a constraint of the form “If any bidder has level at least τ, do not sell to any bidder with level less than τ.” We can interpret the ℓ_{i,τ}’s (with fixed τ, ranging over bidders i) as the bidder values that map to some common virtual value. For example, 1-level auctions treat all values below the single threshold as having negative virtual value, and above the threshold use values as proxies for virtual values. 2-level auctions use the second threshold to refine virtual value estimates, and so on. With this interpretation, it is intuitively clear that as t → ∞, it is possible to estimate bidders’ virtual valuation functions and thus approximate Myerson’s optimal auction to arbitrary accuracy. 3.2 The Pseudo-Dimension of t-Level Auctions This section shows that the pseudo-dimension of the class of t-level single-item auctions with n bidders is O(nt log nt). Combining this with Theorem 2.1 immediately yields sample complexity bounds (parameterized by t) for learning the best such auction from samples. Theorem 3.3 For a fixed tie-breaking order, the set of n-bidder single-item t-level auctions has pseudo-dimension O(nt log(nt)).
Proof: Recall from Section 2 that we need to upper bound the size of every set that is shatterable using t-level auctions. Fix a set of samples S = {v¹, …, v^m} of size m and a potential witness R = (r_1, …, r_m). Each auction c induces a binary labeling of the samples v^j of S (whether c’s revenue on v^j is at least r_j or strictly less than r_j). The set S is shattered with witness R if and only if the number of distinct labelings of S given by t-level auctions is 2^m. We upper-bound the number of distinct labelings of S given by t-level auctions (for some fixed potential witness R), counting the labelings in two stages. Note that S involves nm numbers (one value v_i^j for each bidder in each sample) and a t-level auction involves nt numbers (t thresholds ℓ_{i,τ} for each bidder). Call two t-level auctions with thresholds {ℓ_{i,τ}} and {ℓ̂_{i,τ}} equivalent if: 1. The relative order of the ℓ_{i,τ}’s agrees with that of the ℓ̂_{i,τ}’s, in that both induce the same permutation of {1, 2, …, n} × {0, 1, …, t−1}. 2. Merging the sorted list of the v_i^j’s with the sorted list of the ℓ_{i,τ}’s yields the same partition of the v_i^j’s as does merging it with the sorted list of the ℓ̂_{i,τ}’s. Note that this is an equivalence relation. If two t-level auctions are equivalent, every comparison between a valuation and a threshold, or between two valuations, is resolved identically by those auctions. Using the defining properties of equivalence, a crude upper bound on the number of equivalence classes is (nt)! · \binom{nm + nt}{nt} ≤ (nm + nt)^{2nt}. (1) We now upper-bound the number of distinct labelings of S that can be generated by t-level auctions in a single equivalence class C. First, as all comparisons between two numbers (valuations or thresholds) are resolved identically for all auctions in C, each bidder i in each sample v^j of S is assigned the same level (across auctions in C), and the winner (if any) in each sample v^j is constant across all of C.
By the same reasoning, the identity of the parameter that gives the winner’s payment (some ℓ_{i,τ}) is uniquely determined by pairwise comparisons (recall Section 3.1) and hence is common across all auctions in C. The payments ℓ_{i,τ}, however, can vary across auctions in the equivalence class. For a bidder i and level τ ∈ {0, 1, 2, …, t−1}, let S_{i,τ} ⊆ S be the subset of samples in which bidder i wins and pays ℓ_{i,τ}. The revenue obtained by each auction in C on a sample of S_{i,τ} is simply ℓ_{i,τ} (and independent of all other parameters of the auction). Thus, ranging over all t-level auctions in C generates at most |S_{i,τ}| distinct binary labelings of S_{i,τ}: the possible subsets of S_{i,τ} for which an auction meets the corresponding target r_j form a nested collection. Summarizing, within the equivalence class C of t-level auctions, varying a parameter ℓ_{i,τ} generates at most |S_{i,τ}| different labelings of the samples S_{i,τ} and has no effect on the other samples. Since the subsets {S_{i,τ}}_{i,τ} are disjoint, varying all of the ℓ_{i,τ}’s (i.e., ranging over C) generates at most ∏_{i=1}^{n} ∏_{τ=0}^{t−1} |S_{i,τ}| ≤ m^{nt} (2) distinct labelings of S. Combining (1) and (2), the class of all t-level auctions produces at most (nm + nt)^{3nt} distinct labelings of S. Since shattering S requires 2^m distinct labelings, we conclude that 2^m ≤ (nm + nt)^{3nt}, implying m = O(nt log nt) as claimed. ∎ 3.3 The Representation Error of Single-Item t-Level Auctions In this section, we show that for every bounded product distribution, there exists a t-level auction with expected revenue close to that of the optimal single-item auction when bidders are independent and bounded. The analysis “rounds” an optimal auction to a t-level auction without losing much expected revenue.
This is done using thresholds to approximate each bidder’s virtual value: the lowest threshold at the bidder’s monopoly reserve price, the next 1/ε thresholds at the values at which bidder i’s virtual value surpasses multiples of ε, and the remaining thresholds at the values where bidder i’s virtual value reaches powers of 1 + ε. Theorem 3.4 formalizes this intuition. Theorem 3.4 Suppose F is a distribution over [1, H]^n. If t = Ω(1/ε + log_{1+ε} H), then C_t contains a single-item auction with expected revenue at least 1 − ε times the optimal expected revenue. Theorem 3.4 follows immediately from the following lemma, with α = γ = 1. We prove this more general result for later use. Lemma 3.5 Consider n bidders with valuations in [0, H] and with P[max_i v_i > α] ≥ γ. Then C_t contains a single-item auction with expected revenue at least 1 − ε times that of an optimal auction, for t = Θ(1/(γε) + log_{1+ε}(H/α)). Proof: Consider a fixed bidder i. We define t thresholds for i, bucketing i by her virtual value, and prove that the t-level auction A using these thresholds for each bidder closely approximates the expected revenue of the optimal auction M. Let ε′ be a parameter defined later. Set ℓ_{i,0} = φ_i^{−1}(0), bidder i’s monopoly reserve.⁵ For τ ∈ [1, ⌈1/(γε′)⌉], let ℓ_{i,τ} = φ_i^{−1}(τ · αγε′) (covering φ_i ∈ [0, α]). For τ ∈ [⌈1/(γε′)⌉, ⌈1/(γε′)⌉ + ⌈log_{1+ε/2}(H/α)⌉], let ℓ_{i,τ} = φ_i^{−1}(α(1 + ε/2)^{τ − ⌈1/(γε′)⌉}) (covering φ_i > α). Consider a fixed valuation profile v. Let i* denote the winner according to A, and i′ the winner according to the optimal auction M. If there is no winner, we interpret φ_{i*}(v_{i*}) and φ_{i′}(v_{i′}) as 0. Recall that M always awards the item to a bidder with the highest positive virtual value (or to no one, if no such bidder exists). The definition of the thresholds immediately implies the following. 1. A only allocates to bidders with non-negative ironed virtual values. 2. If there is no tie (that is, there is a unique bidder at the highest level), then i′ = i*. 3.
When there is a tie at level τ, the virtual value of the winner of A is close to that of M: if τ ∈ [0, ⌈1/(γε′)⌉], then φ_{i′}(v_{i′}) − φ_{i*}(v_{i*}) ≤ αγε′; if τ ∈ [⌈1/(γε′)⌉, ⌈1/(γε′)⌉ + ⌈log_{1+ε/2}(H/α)⌉], then φ_{i*}(v_{i*})/φ_{i′}(v_{i′}) ≥ 1 − ε/2. These facts imply that E_v[Rev(A)] = E_v[φ_{i*}(v_{i*})] ≥ (1 − ε/2) · E_v[φ_{i′}(v_{i′})] − αγε′ = (1 − ε/2) · E_v[Rev(M)] − αγε′. (3) The first and final equalities follow because the allocations of A and M depend only on ironed virtual values, not on the values themselves; thus the ironed virtual values equal the unironed virtual values in expectation, which in turn equals the revenue of each mechanism (see [13], Chapter 3.5, for discussion). As P[max_i v_i > α] ≥ γ, it must be that E[Rev(M)] ≥ αγ (a posted price of α achieves this revenue). Combining this with (3) and setting ε′ = ε/2 implies E_v[Rev(A)] ≥ (1 − ε) E_v[Rev(M)]. ∎ Combining Theorems 2.1 and 3.4 yields the following Corollary 3.6. Corollary 3.6 Let F be a product distribution with all bidders’ valuations in [1, H]. Assume that t = Θ(1/ε + log_{1+ε} H) and m = O((H/ε)² (nt log(nt) log(H/ε) + log(1/δ))) = Õ(H²n/ε³). Then with probability at least 1 − δ, the single-item empirical revenue maximizer of C_t on a set of m samples from F has expected revenue at least 1 − ε times that of the optimal auction. Open Questions There are some significant opportunities for follow-up research. First, there is much to do on the design of computationally efficient (in addition to sample-efficient) algorithms for learning a near-optimal auction. The present work focuses on sample complexity, and our learning algorithms are generally not computationally efficient.⁶ The general research agenda here is to identify auction classes C for various settings such that: 1. C has low representation error; 2. C has small pseudo-dimension; 3.
There is a polynomial-time algorithm to find an approximately revenue-maximizing auction from C on a given set of samples.⁷ There are also interesting open questions on the statistical side, notably for multi-parameter problems. While the negative result in [11] rules out a universally good upper bound on the sample complexity of learning a near-optimal mechanism in multi-parameter settings, we suspect that positive results are possible for several interesting special cases. ⁵Recall from Section 2 that φ_i denotes the virtual valuation function of bidder i. (From here on, we always mean the ironed version of virtual values.) It is convenient to assume that these functions are strictly increasing (not just nondecreasing); this can be enforced at the cost of losing an arbitrarily small amount of revenue. ⁶There is a clear parallel with computational learning theory [22]: while the information-theoretic foundations of classification (VC dimension, etc. [23]) have long been understood, this research area strives to understand which low-dimensional concept classes are learnable in polynomial time. ⁷The sample-complexity and performance bounds implied by pseudo-dimension analysis, as in Theorem 2.1, hold with such an approximation algorithm, with the algorithm’s approximation factor carrying through to the learning algorithm’s guarantee. See also [4, 11]. References [1] Martin Anthony and Peter L. Bartlett. Neural Network Learning: Theoretical Foundations. Cambridge University Press, New York, NY, USA, 1999. [2] Moshe Babaioff, Nicole Immorlica, Brendan Lucier, and S. Matthew Weinberg. A simple and approximately optimal mechanism for an additive buyer. SIGecom Exchanges, 13(2):31–35, January 2015. [3] Maria-Florina Balcan, Avrim Blum, and Yishay Mansour. Single price mechanisms for revenue maximization in unlimited supply combinatorial auctions. Technical report, Carnegie Mellon University, 2007. [4] Maria-Florina Balcan, Avrim Blum, Jason D. Hartline, and Yishay Mansour.
Reducing mechanism design to algorithm design via machine learning. Journal of Computer and System Sciences, 74(8):1245–1270, 2008. [5] Yang Cai and Constantinos Daskalakis. Extreme-value theorems for optimal multidimensional pricing. In Foundations of Computer Science (FOCS), 2011 IEEE 52nd Annual Symposium on, pages 522–531, Palm Springs, CA, USA, October 2011. IEEE. [6] Shuchi Chawla, Jason Hartline, and Robert Kleinberg. Algorithmic pricing via virtual valuations. In Proceedings of the 8th ACM Conference on Electronic Commerce, pages 243–251, New York, NY, USA, 2007. ACM. [7] Shuchi Chawla, Jason D. Hartline, David L. Malec, and Balasubramanian Sivan. Multi-parameter mechanism design and sequential posted pricing. In Proceedings of the Forty-Second ACM Symposium on Theory of Computing, pages 311–320, New York, NY, USA, 2010. ACM. [8] Richard Cole and Tim Roughgarden. The sample complexity of revenue maximization. In Proceedings of the 46th Annual ACM Symposium on Theory of Computing, pages 243–252, New York, NY, USA, 2014. ACM. [9] Nikhil Devanur, Jason Hartline, Anna Karlin, and Thach Nguyen. Prior-independent multi-parameter mechanism design. In Internet and Network Economics, pages 122–133. Springer, 2011. [10] Peerapong Dhangwatnotai, Tim Roughgarden, and Qiqi Yan. Revenue maximization with a single sample. In Proceedings of the 11th ACM Conference on Electronic Commerce, pages 129–138, New York, NY, USA, 2010. ACM. [11] Shaddin Dughmi, Li Han, and Noam Nisan. Sampling and representation complexity of revenue maximization. In Web and Internet Economics, volume 8877 of Lecture Notes in Computer Science, pages 277–291. Springer International Publishing, 2014. [12] Edith Elkind. Designing and learning optimal finite support auctions. In Proceedings of the Eighteenth Annual ACM-SIAM Symposium on Discrete Algorithms, pages 736–745. SIAM, 2007. [13] Jason Hartline. Mechanism Design and Approximation. Book draft, 2015. [14] Jason D. Hartline and Tim Roughgarden.
Simple versus optimal mechanisms. In Proceedings of the ACM Conference on Electronic Commerce, Stanford, CA, USA, 2009. ACM. [15] Zhiyi Huang, Yishay Mansour, and Tim Roughgarden. Making the most of your samples. arXiv:1407.2479, 2014. URL http://arxiv.org/abs/1407.2479. [16] Michael J. Kearns and Umesh V. Vazirani. An Introduction to Computational Learning Theory. MIT Press, Cambridge, MA, 1994. [17] Andres Munoz Medina and Mehryar Mohri. Learning theory and algorithms for revenue optimization in second price auctions with reserve. In Proceedings of the 31st International Conference on Machine Learning, pages 262–270, 2014. [18] Roger B. Myerson. Optimal auction design. Mathematics of Operations Research, 6(1):58–73, 1981. [19] David Pollard. Convergence of Stochastic Processes. Springer, New York, 1984. [20] T. Roughgarden and O. Schrijvers. Ironing in the dark. Submitted, 2015. [21] Tim Roughgarden, Inbal Talgam-Cohen, and Qiqi Yan. Supply-limiting mechanisms. In Proceedings of the 13th ACM Conference on Electronic Commerce, pages 844–861, New York, NY, USA, 2012. ACM. [22] Leslie G. Valiant. A theory of the learnable. Communications of the ACM, 27(11):1134–1142, 1984. [23] Vladimir N. Vapnik and A. Ya. Chervonenkis. On the uniform convergence of relative frequencies of events to their probabilities. Theory of Probability & Its Applications, 16(2):264–280, 1971. [24] Andrew Chi-Chih Yao. An n-to-1 bidder reduction for multi-item auctions and its applications. In Proceedings of the Twenty-Sixth Annual ACM-SIAM Symposium on Discrete Algorithms, pages 92–109, San Diego, CA, USA, 2015. SIAM.
Fast Second-Order Stochastic Backpropagation for Variational Inference Kai Fan Duke University kai.fan@stat.duke.edu Ziteng Wang* HKUST† wangzt2012@gmail.com Jeffrey Beck Duke University jeff.beck@duke.edu James T. Kwok HKUST jamesk@cse.ust.hk Katherine Heller Duke University kheller@gmail.com Abstract We propose a second-order (Hessian or Hessian-free) optimization method for variational inference inspired by Gaussian backpropagation, and argue that quasi-Newton optimization can be developed as well. This is accomplished by generalizing the gradient computation in stochastic backpropagation via a reparameterization trick with lower complexity. As an illustrative example, we apply this approach to the problems of Bayesian logistic regression and the variational auto-encoder (VAE). Additionally, we compute bounds on the estimator variance of intractable expectations for the family of Lipschitz continuous functions. Our method is practical, scalable and model free. We demonstrate our method on several real-world datasets and provide comparisons with other stochastic gradient methods to show substantial enhancement in convergence rates. 1 Introduction Generative models have become ubiquitous in machine learning and statistics and are now widely used in fields such as bioinformatics, computer vision, and natural language processing. These models benefit from being highly interpretable and easily extended. Unfortunately, inference and learning with generative models is often intractable, especially for models that employ continuous latent variables, and so fast approximate methods are needed. Variational Bayesian (VB) methods [1] deal with this problem by approximating the true posterior with a distribution that has a tractable parametric form and then identifying the set of parameters that maximize a variational lower bound on the marginal likelihood. That is, VB methods turn an inference problem into an optimization problem that can be solved, for example, by gradient ascent.
Indeed, efficient stochastic gradient variational Bayesian (SGVB) estimators have been developed for auto-encoder models [17] and a number of papers have followed up on this approach [28, 25, 19, 16, 15, 26, 10]. Recently, [25] provided a complementary perspective by using stochastic backpropagation that is equivalent to SGVB and applied it to deep latent Gaussian models. Stochastic backpropagation overcomes many limitations of traditional inference methods such as the mean-field or wake-sleep algorithms [12], because an unbiased estimate of the gradient of the variational lower bound can be computed efficiently. The resulting gradients can be used for parameter estimation via stochastic optimization methods such as stochastic gradient descent (SGD) or an adaptive version (AdaGrad) [6]. *Equal contribution to this work. †HKUST refers to the Hong Kong University of Science and Technology. Unfortunately, methods such as SGD or AdaGrad converge slowly for some difficult-to-train models, such as untied-weights auto-encoders or recurrent neural networks. The common experience is that gradient descent often gets stuck near saddle points or local extrema, while the learning rate is difficult to tune. [18] gave a clear explanation of why Newton’s method is preferred over gradient descent: gradient descent often under-fits when the objective function manifests pathological curvature. Newton’s method is invariant to affine transformations, so it can take advantage of curvature information, but it has higher computational cost due to its reliance on the inverse of the Hessian matrix. This issue was partially addressed in [18], where the authors introduced Hessian-free (HF) optimization and demonstrated its suitability for problems in machine learning. In this paper, we continue this line of research into 2nd order variational inference algorithms.
Inspired by the properties of location-scale families [8], we show how to reduce the computational cost of the Hessian or Hessian-vector product, thus allowing a 2nd order stochastic optimization scheme for variational inference under a Gaussian approximation. In conjunction with HF optimization, we propose an efficient and scalable 2nd order stochastic Gaussian backpropagation for variational inference, called HFSGVI. Alternatively, an L-BFGS [3] version, a quasi-Newton method that uses only gradient information, is a natural generalization of 1st order variational inference. The most immediate application is to obtain better optimization algorithms for variational inference. To our knowledge, the only model currently applying 2nd order information is LDA [2, 14], where the Hessian is easy to compute [11]. In general, for non-linear factor models such as non-linear factor analysis or deep latent Gaussian models, this is not the case. Indeed, to our knowledge, there has not been any systematic investigation into the properties of various optimization algorithms and how they might impact the solutions of the optimization problems arising from variational approximations. The main contributions of this paper fill this gap for variational inference by introducing a novel 2nd order optimization scheme. First, we describe an approach to obtain curvature information at low computational cost, making Newton’s method both scalable and efficient. Second, we show that the variance of the lower bound estimator can be bounded by a dimension-free constant, extending the work of [25], which discussed a specific bound for univariate functions. Third, we demonstrate the performance of our method on Bayesian logistic regression and the VAE model in comparison to commonly used algorithms. The convergence rate is shown to be competitive or faster.
2 Stochastic Backpropagation In this section, we extend the Bonnet and Price theorem [4, 24] to develop 2nd order Gaussian backpropagation. Specifically, we consider how to optimize an expectation of the form E_{q_θ}[f(z|x)], where z and x refer to latent variables and observed variables respectively, the expectation is taken w.r.t. the distribution q_θ, and f is some smooth loss function (e.g., it can be derived from a standard variational lower bound [1]). Sometimes we abuse notation and write f(z), omitting x, when no ambiguity exists. To optimize such an expectation, gradient descent methods require the 1st derivatives, while Newton’s methods require both the gradient and the Hessian, involving 2nd order derivatives. 2.1 Second Order Gaussian Backpropagation If the distribution q is a d_z-dimensional Gaussian N(z|µ, C), the required partial derivatives are easily computed with a low algorithmic cost O(d_z²) [25]. Using properties of the Gaussian distribution, we can compute the 2nd order partial derivatives of E_{N(z|µ,C)}[f(z)] as follows:

∇²_{µ_i µ_j} E_{N(z|µ,C)}[f(z)] = E_{N(z|µ,C)}[∇²_{z_i z_j} f(z)] = 2 ∇_{C_{ij}} E_{N(z|µ,C)}[f(z)],   (1)

∇²_{C_{ij}, C_{kl}} E_{N(z|µ,C)}[f(z)] = (1/4) E_{N(z|µ,C)}[∇⁴_{z_i z_j z_k z_l} f(z)],   (2)

∇²_{µ_i, C_{kl}} E_{N(z|µ,C)}[f(z)] = (1/2) E_{N(z|µ,C)}[∇³_{z_i z_k z_l} f(z)].   (3)

Eqs. (1), (2), (3) (proof in the supplementary material) have the nice property that a limited number of samples from q is sufficient to obtain unbiased gradient estimates. However, note that Eqs. (2) and (3) require the third and fourth derivatives of f(z), which are highly computationally inefficient. To avoid the calculation of these high order derivatives, we use a coordinate transformation. 2.2 Covariance Parameterization for Optimization By constructing the linear transformation (a.k.a. reparameterization) z = µ + Rε, where ε ∼ N(0, I_{d_z}), we can generate samples from any Gaussian distribution N(µ, C) by simulating data from a standard normal distribution, provided the decomposition C = RR^T holds.
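The first identity in Eq. (1) can be sanity-checked on a toy case of my own choosing: for a quadratic f(z) = z^T A z, the Hessian in z is the constant A + A^T, and E_{N(µ,C)}[f(z)] = µ^T A µ + tr(AC) has Hessian A + A^T in µ as well, so both sides agree exactly. The sketch below verifies this numerically in pure Python (the function names are mine).

```python
def expected_f(mu, A, C):
    """Closed form of E_{N(mu,C)}[z^T A z] = mu^T A mu + tr(A C)."""
    d = len(mu)
    quad = sum(mu[i] * A[i][j] * mu[j] for i in range(d) for j in range(d))
    trace = sum(A[i][j] * C[j][i] for i in range(d) for j in range(d))
    return quad + trace

def hessian_mu(mu, A, C, h=1e-4):
    """Finite-difference Hessian of E[f] with respect to mu."""
    d = len(mu)
    H = [[0.0] * d for _ in range(d)]
    for i in range(d):
        for j in range(d):
            def g(da, db):
                m = list(mu)
                m[i] += da
                m[j] += db
                return expected_f(m, A, C)
            # central second difference; also valid on the diagonal (i == j)
            H[i][j] = (g(h, h) - g(h, -h) - g(-h, h) + g(-h, -h)) / (4 * h * h)
    return H
```

Because f is quadratic, the finite differences are exact up to floating-point rounding, and the computed Hessian matches E[∇²_z f] = A + A^T.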
This decomposition allows us to derive the following theorem, indicating that the computation of 2nd order derivatives can be made scalable and parallelizable. Theorem 1 (Fast Derivative). If f is a twice differentiable function and z follows the Gaussian distribution N(µ, C), C = RR^T, where both the mean µ and R depend on a d-dimensional parameter θ = (θ_l)_{l=1}^d, i.e., µ(θ), R(θ), then we have ∇²_{µ,R} E_{N(µ,C)}[f(z)] = E_{ε∼N(0,I_{d_z})}[ε^T ⊗ H] and ∇²_R E_{N(µ,C)}[f(z)] = E_{ε∼N(0,I_{d_z})}[(εε^T) ⊗ H]. This then implies

∇_{θ_l} E_{N(µ,C)}[f(z)] = E_{ε∼N(0,I)}[ g^T ∂(µ + Rε)/∂θ_l ],   (4)

∇²_{θ_{l1} θ_{l2}} E_{N(µ,C)}[f(z)] = E_{ε∼N(0,I)}[ (∂(µ + Rε)/∂θ_{l1})^T H (∂(µ + Rε)/∂θ_{l2}) + g^T ∂²(µ + Rε)/(∂θ_{l1} ∂θ_{l2}) ],   (5)

where ⊗ is the Kronecker product, and the gradient g and Hessian H are evaluated at µ + Rε in terms of f(z). If we consider the mean and covariance matrix as the variational parameters in variational inference, the first two results w.r.t. µ and R make parallelization possible and reduce the computational cost of the Hessian-vector multiplication, due to the fact that (A^T ⊗ B) vec(V) = vec(BVA). If the model has few parameters or a large resource budget (e.g., a GPU) is available, Theorem 1 lays the foundation for exact 2nd order derivative computation in parallel. In addition, note that the 2nd order gradient computation on the model parameter θ only involves matrix-vector or vector-vector multiplications, leading to an algorithmic complexity of O(d_z²) for the 2nd order derivatives of θ, which is the same as for the 1st order gradient [25]. The derivative computation of f is needed only up to 2nd order, avoiding the calculation of 3rd or 4th order derivatives. One practical parameterization assumes a diagonal covariance matrix C = diag{σ_1², …, σ_{d_z}²}. This reduces the actual computational cost compared with Theorem 1, albeit with the same order of complexity, O(d_z²) (see the supplementary material). Theorem 1 holds for a large class of distributions in addition to the Gaussian, such as the Student’s t-distribution.
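Eq. (4) can be illustrated with a small Monte Carlo sketch. The toy setup is my own: f(z) = z^T A z with θ = µ (R held fixed), so ∂(µ + Rε)/∂θ is the identity, g = (A + A^T)z, and the exact gradient is (A + A^T)µ. The sample average of g then approximates that gradient.

```python
import random

def reparam_grad(A, mu, R, n_samples=40000, seed=0):
    """Monte Carlo estimate of Eq. (4) for f(z) = z^T A z with theta = mu."""
    rng = random.Random(seed)
    d = len(mu)
    total = [0.0] * d
    for _ in range(n_samples):
        eps = [rng.gauss(0.0, 1.0) for _ in range(d)]
        z = [mu[i] + sum(R[i][j] * eps[j] for j in range(d)) for i in range(d)]
        # g = grad_z f(z) = (A + A^T) z; d(mu + R eps)/d theta = I here
        g = [sum((A[i][j] + A[j][i]) * z[j] for j in range(d)) for i in range(d)]
        for i in range(d):
            total[i] += g[i]
    return [t / n_samples for t in total]
```

With a fixed seed and tens of thousands of samples, the estimate lands within a few percent of the exact gradient, which is the unbiasedness property the text relies on.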
If the dimensionality d of the embedded parameter θ is large, computation of the gradient G_θ and Hessian H_θ (which differ from g and H above) will be linear and quadratic in d respectively, which may be unacceptable. Therefore, in the next section we attempt to reduce the computational complexity w.r.t. d. 2.3 Applying Reparameterization in a Second Order Algorithm In standard Newton’s method, we need to compute the Hessian matrix and its inverse, which is intractable with limited computing resources. [18] applied the Hessian-free (HF) optimization method in deep learning effectively and efficiently. This work relied largely on the technique of fast Hessian matrix-vector multiplication [23]. We combine the reparameterization trick with Hessian-free or quasi-Newton methods to circumvent the matrix inversion problem. Hessian-free Unlike quasi-Newton methods, HF does not make any approximation to the Hessian. HF needs to compute H_θ v, where v is any vector whose dimension matches that of H_θ, and then uses the conjugate gradient algorithm to solve the linear system H_θ p = −∇F(θ), for any objective function F. [18] gives a reasonable explanation of Hessian-free optimization. In short, unlike a pre-training method that places the parameters in a search region in order to regularize [7], HF addresses pathological curvature in the objective by taking advantage of the rescaling property of Newton’s method. By definition, H_θ v = lim_{γ→0} (∇F(θ + γv) − ∇F(θ))/γ, indicating that H_θ v can be numerically computed using finite differences at γ. However, this numerical method is unstable for small γ. In this section, we focus on the calculation of H_θ v by leveraging the reparameterization trick. Specifically, we apply the R-operator technique [23] for computing the product H_θ v exactly. Let F = E_{N(µ,C)}[f(z)] and reparameterize z again as in Sec. 2.2; we perform the variable substitution θ ← θ + γv after the gradient in Eq. (4) is obtained, and then take the derivative with respect to γ.
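As a baseline for comparison, the finite-difference evaluation of H_θ v mentioned above can be sketched in a few lines. The quadratic test objective is my own toy choice, and I use a central rather than one-sided difference, which is somewhat more stable; the exact R-operator route in the text avoids this numerical sensitivity altogether.

```python
def hvp_fd(grad, theta, v, gamma=1e-4):
    """Hessian-vector product via central differences of the gradient:
    H v ≈ (grad(theta + gamma v) - grad(theta - gamma v)) / (2 gamma)."""
    d = len(theta)
    gp = grad([theta[i] + gamma * v[i] for i in range(d)])
    gm = grad([theta[i] - gamma * v[i] for i in range(d)])
    return [(gp[i] - gm[i]) / (2 * gamma) for i in range(d)]
```

For F(θ) = ½ θ^T M θ the gradient is Mθ and the product reduces to Mv, so the sketch can be checked exactly against matrix multiplication.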
Thus we have the following analytical expression for Hessian-vector multiplication:

H_θ v = ∂/∂γ [∇F(θ + γv)]|_{γ=0} = ∂/∂γ E_{N(0,I)}[ g⊤ ∂(μ(θ) + R(θ)ε)/∂θ |_{θ←θ+γv} ]|_{γ=0} = E_{N(0,I)}[ ∂/∂γ ( g⊤ ∂(μ(θ) + R(θ)ε)/∂θ |_{θ←θ+γv} )|_{γ=0} ].   (6)

Eq. (6) is appealing since it does not require storing a dense matrix and provides an unbiased estimator of H_θ v with a small sample size. To conduct 2nd order optimization for variational inference, once the computation of the gradient of the variational lower bound is in place, we only need to add one extra gradient evaluation step via Eq. (6), which has the same computational complexity as Eq. (4). This leads to the Hessian-free variational inference method described in Algorithm 1.

Algorithm 1 Hessian-free Algorithm for Stochastic Gaussian Variational Inference (HFSGVI)
Parameters: minibatch size B; number of samples M used to estimate the expectation (M = 1 by default).
Input: observations X (and Y if required); lower bound function L = E_{N(μ,C)}[f_L].
Output: parameter θ after convergence.
1: for t = 1, 2, . . . do
2:   {x_b}_{b=1}^B ← randomly draw B datapoints from the full data set X;
3:   {ε_{m_b}}_{m_b=1}^M ← sample M times from N(0, I) for each x_b;
4:   define the gradient G(θ) = (1/M) Σ_b Σ_{m_b} g_{b,m}⊤ ∂(μ + Rε_{m_b})/∂θ, where g_{b,m} = ∇_z f_L(z|x_b)|_{z=μ+Rε_{m_b}};
5:   define the function B(θ, v) = ∇_γ G(θ + γv)|_{γ=0}, where v is a d-dimensional vector;
6:   use the conjugate gradient algorithm to solve the linear system B(θ_t, p_t) = −G(θ_t);
7:   θ_{t+1} = θ_t + p_t;
8: end for

In the worst case of HF, the conjugate gradient (CG) algorithm requires at most d iterations to terminate, meaning d evaluations of the H_θ v product. The good news, however, is that CG achieves good convergence after a reasonable number of iterations. In practice we found that it may not be necessary to wait for CG to converge: even if we set the maximum number of CG iterations K to a small fixed number (e.g., 10 in our experiments, even with thousands of parameters), the performance does not deteriorate.
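Step 6 of Algorithm 1 only needs Hessian-vector products; a minimal conjugate gradient sketch (toy 2×2 system, not the paper's code) makes this concrete:

```python
import numpy as np

def cg_solve(hvp, b, max_iter=10, tol=1e-10):
    """Conjugate gradient for H p = b using only Hessian-vector products
    (step 6 of Algorithm 1); hvp(v) returns H @ v without forming H."""
    p = np.zeros_like(b)
    r = b - hvp(p)              # residual
    d = r.copy()                # search direction
    for _ in range(max_iter):
        Hd = hvp(d)
        alpha = (r @ r) / (d @ Hd)
        p = p + alpha * d
        r_new = r - alpha * Hd
        if np.linalg.norm(r_new) < tol:
            break
        d = r_new + ((r_new @ r_new) / (r @ r)) * d
        r = r_new
    return p

# Toy SPD "Hessian"; the Newton step solves H p = -grad.
H = np.array([[3.0, 1.0], [1.0, 2.0]])
grad = np.array([1.0, -2.0])
p = cg_solve(lambda v: H @ v, -grad)
print(p)  # agrees with np.linalg.solve(H, -grad)
```

In the algorithm the `hvp` callback would be the unbiased estimator B(θ, v) of Eq. (6), and `max_iter` corresponds to the small fixed K discussed above.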
The early stopping strategy may have a similar effect to the Wolfe condition, avoiding excessive step sizes in Newton's method. We thereby reduce the complexity of each iteration to O(Kdd_z²), compared with O(dd_z²) for one SGD iteration.

L-BFGS. Limited-memory BFGS utilizes the information gleaned from the gradient vector to approximate the Hessian matrix without explicit computation, and we can readily utilize it within our framework. The basic idea of BFGS is to approximate the Hessian by the iterative update B_{t+1} = B_t + ΔG_t ΔG_t⊤/(ΔG_t⊤ Δθ_t) − B_t Δθ_t Δθ_t⊤ B_t/(Δθ_t⊤ B_t Δθ_t), where ΔG_t = G_t − G_{t−1} and Δθ_t = θ_t − θ_{t−1}. By Eq. (4), the gradient G_t at each iteration can be obtained without any difficulty. However, even though this low-rank approximation of the Hessian is easy to invert analytically via the Sherman-Morrison formula, we would still need to store the matrix. L-BFGS instead implicitly approximates the dense B_t or B_t⁻¹ by tracking only a few gradient vectors and a short history of parameters, and therefore has a linear memory requirement. In general, L-BFGS performs a sequence of inner products with the K most recent Δθ_t and ΔG_t, where K is a predefined constant (10 or 15 in our experiments). Due to space limitations we omit the details here, but we nonetheless evaluate this algorithm in the experiments section.

2.4 Estimator Variance

The framework of stochastic backpropagation [16, 17, 19, 25] extensively uses the mean of very few samples (often just one) to approximate the expectation. Similarly, we approximate the left-hand sides of Eqs. (4), (5), (6) by sampling a few points from the standard normal distribution. However, the magnitude of the variance of such an estimator has not been seriously discussed. [25] explored the variance quantitatively only for separable functions, and [19] merely borrowed a variance reduction technique from reinforcement learning, centering the learning signal in expectation and performing variance normalization.
Here, we generalize the treatment of the variance to a broader family: Lipschitz continuous functions.

Theorem 2 (Variance Bound). If f is an L-Lipschitz differentiable function and ε ∼ N(0, I_{d_z}), then E[(f(ε) − E[f(ε)])²] ≤ L²π²/4.

The proof of Theorem 2 (see supplementary) employs the properties of sub-Gaussian distributions and the duplication trick commonly used in learning theory. Significantly, the result implies a variance bound independent of the dimensionality of the Gaussian variable. Note that from the proof we can only obtain E[exp(λ(f(ε) − E[f(ε)]))] ≤ exp(L²λ²π²/8) for λ > 0. Though this result suffices to illustrate that the variance is independent of d_z, it can in fact be tightened by a constant factor to exp(λ²L²/2), leading to the result of Theorem 2 with Var(f(ε)) ≤ L². If all the results above hold for smooth (twice continuously differentiable) functions with Lipschitz constant L, then they hold for all Lipschitz functions by a standard approximation argument; the condition can thus be relaxed to Lipschitz continuous functions.

Corollary 3 (Bias Bound). P(|(1/M) Σ_{m=1}^M f(ε_m) − E[f(ε)]| ≥ t) ≤ 2 exp(−2Mt²/(π²L²)).

It is also worth mentioning that this significant corollary of Theorem 2 is a probabilistic inequality measuring the convergence rate of the Monte Carlo approximation in our setting. This tail bound, together with the variance bound, provides a theoretical guarantee for stochastic backpropagation on Gaussian variables and explains why a single realization (M = 1) is enough in practice. By reparametrization, Eqs. (4), (5), (6) can be formulated as expectations w.r.t. the isotropic Gaussian distribution with identity covariance matrix, leading to Algorithm 1. Thus we can rein in the number of samples for Monte Carlo integration regardless of the dimensionality of the latent variables z. This seems counter-intuitive; however, we note that a larger L may require more samples, and the Lipschitz constants of different models vary greatly.
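The dimension-free nature of Theorem 2 can be checked empirically; the sketch below uses the 1-Lipschitz function f(ε) = ‖ε‖ (sample sizes and dimensions are arbitrary), for which the sample variance stays below the bound L²π²/4 regardless of d_z:

```python
import numpy as np

rng = np.random.default_rng(0)

# f(eps) = ||eps|| is 1-Lipschitz, so Theorem 2 bounds its variance by
# pi^2 / 4 ~ 2.47 in every dimension d_z.
L = 1.0
bound = L**2 * np.pi**2 / 4
sample_vars = {}
for d_z in (1, 10, 100):
    eps = rng.standard_normal((20_000, d_z))
    sample_vars[d_z] = np.linalg.norm(eps, axis=1).var()
print(sample_vars)  # each value stays below the bound, independent of d_z
```

The bound is loose here (the true variance of ‖ε‖ stays near 0.5 for large d_z), but the point is that it does not grow with the dimensionality.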
3 Application to the Variational Auto-encoder

Note that our method is model free: if the loss function has the form of an expectation of a function w.r.t. latent Gaussian variables, we can directly use Algorithm 1. In this section we put the emphasis on the standard VAE model [17], which has been intensively researched; in particular, the function takes a logarithmic form, which bridges the gap between the Hessian and the Fisher information matrix in expectation (see the survey [22] and references therein).

3.1 Model Description

Suppose we have N i.i.d. observations X = {x⁽ⁱ⁾}_{i=1}^N, where x⁽ⁱ⁾ ∈ R^D is a data vector that can take either continuous or discrete values. In contrast to a standard auto-encoder model constructed by a neural network with a bottleneck structure, the VAE describes the embedding process from the perspective of a Gaussian latent variable model. Specifically, each data point x follows the generative model p_ψ(x|z); this process is in fact a decoder, usually constructed as a non-linear transformation with unknown parameters ψ, together with a prior distribution p_ψ(z). The encoder or recognition model q_φ(z|x) is used to approximate the true posterior p_ψ(z|x), where φ plays the role of the parameter of the variational distribution. As suggested in [16, 17, 25], a multi-layered perceptron (MLP) is commonly used as both the probabilistic encoder and decoder. We will later see that this construction is equivalent to a variant of deep neural networks under the constraint of a unique realization of z. For this model, the variational lower bound on the marginal likelihood of each datapoint is

log p_ψ(x⁽ⁱ⁾) ≥ E_{q_φ(z|x⁽ⁱ⁾)}[log p_ψ(x⁽ⁱ⁾|z)] − D_KL(q_φ(z|x⁽ⁱ⁾) ‖ p_ψ(z)) = L(x⁽ⁱ⁾).   (7)

We can in fact write the KL divergence as an expectation term, and we denote (ψ, φ) by θ. By the previous discussion, our objective is then to solve the optimization problem arg max_θ Σ_i L(x⁽ⁱ⁾), the variational lower bound over the full dataset.
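To make Eq. (7) concrete, here is a minimal numpy sketch of the lower bound for a Bernoulli decoder with a diagonal Gaussian encoder; the `decode` map and all sizes are hypothetical stand-ins rather than the paper's architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

def elbo(x, mu, log_sigma, decode, M=1):
    """Monte Carlo estimate of Eq. (7): E_q[log p(x|z)] - KL(q(z|x) || N(0,I)),
    with a diagonal Gaussian encoder q = N(mu, diag(sigma^2))."""
    sigma = np.exp(log_sigma)
    # closed-form KL between N(mu, diag(sigma^2)) and the standard normal prior
    kl = 0.5 * np.sum(sigma**2 + mu**2 - 1.0 - 2.0 * log_sigma)
    recon = 0.0
    for _ in range(M):
        z = mu + sigma * rng.standard_normal(mu.shape)  # reparameterization trick
        y = decode(z)                                   # Bernoulli means in (0, 1)
        recon += np.sum(x * np.log(y) + (1 - x) * np.log(1 - y))
    return recon / M - kl

# Hypothetical tiny decoder (a sigmoid-squashed linear map), for illustration only.
W = 0.1 * rng.standard_normal((5, 2))
decode = lambda z: 1.0 / (1.0 + np.exp(-(W @ z)))
x = np.array([1.0, 0.0, 1.0, 1.0, 0.0])
bound_val = elbo(x, mu=np.zeros(2), log_sigma=np.zeros(2), decode=decode)
print(bound_val)  # a (negative) lower bound on log p(x)
```

With M = 1, as used throughout the experiments, each evaluation draws a single ε; Theorem 2 is what justifies this.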
Thus the L-BFGS or HF SGVI algorithm can be implemented straightforwardly to estimate the parameters of both the generative and recognition models. Since the first term of Eq. (7), the reconstruction error, appears as an expectation over the latent variable, [17, 25] used a small finite number M of samples as Monte Carlo integration with the reparameterization trick to reduce the variance; this amounts to drawing samples from the standard normal distribution. In addition, the second term is the KL divergence between the variational distribution and the prior distribution, which acts as a regularizer.

3.2 Deep Neural Networks with Hybrid Hidden Layers

In the experiments, setting M = 1 can not only achieve excellent performance but also speed up the program. In this special case, we discuss the relationship between the VAE and the traditional deep auto-encoder. For binary inputs, denoting the output as y, we have log p_ψ(x|z) = Σ_{j=1}^D x_j log y_j + (1 − x_j) log(1 − y_j), which is exactly the negative cross-entropy. It is also apparent that log p_ψ(x|z) is equivalent to the negative squared error loss for continuous data. This means that maximizing the lower bound is roughly equal to minimizing the loss function of a deep neural network (see Figure 1 in the supplementary), except for the different regularizers. In other words, the prior in the VAE only imposes a regularizer on the encoder or generative model, while an L2 penalty on all parameters is always considered in deep neural nets. From the perspective of deep neural networks with hybrid hidden nodes, the model consists of two Bernoulli layers and one Gaussian layer. The gradient computation can simply follow a variant of backpropagation, layer by layer (derivation given in the supplementary). To further see the rationale of setting M = 1, we investigate the upper bound of the Lipschitz constant under various activation functions in the next lemma.
As Theorem 2 implies, the variance of the expectation approximated from finite samples mainly depends on the Lipschitz constant rather than the dimensionality. According to Lemma 4, imposing a prior or regularization on the parameters can control both the model complexity and the function smoothness. Lemma 4 also implies that we can obtain an upper bound on the Lipschitz constant for the estimators designed in our algorithm.

Lemma 4. Consider a sigmoid activation function g in a deep neural network with one Gaussian layer z, where z ∼ N(μ, C), C = R⊤R. Let z = μ + Rε. Then the Lipschitz constant of g(W_{i,·}(μ + Rε) + b_i) is bounded by (1/4)‖W_{i,·}R‖₂, where W_{i,·} is the i-th row of the weight matrix and b_i is the i-th bias element. Similarly, for the hyperbolic tangent or softplus function, the Lipschitz constant is bounded by ‖W_{i,·}R‖₂.

4 Experiments

We apply our 2nd order stochastic variational inference to two different non-conjugate models. First, we consider the simple but widely used Bayesian logistic regression model, and compare with the most recent 1st order algorithm, doubly stochastic variational inference (DSVI) [28], designed for sparse variable selection with logistic regression. Then, we compare the performance of the VAE model trained with our algorithms.

4.1 Bayesian Logistic Regression

Given a dataset {x_i, y_i}_{i=1}^N, where each instance x_i ∈ R^D includes the default feature 1 and y_i ∈ {−1, 1} is the binary label, Bayesian logistic regression models the probability of the outputs conditional on the features and the coefficients β, with a prior imposed on β. The likelihood and the prior usually take the forms Π_{i=1}^N g(y_i x_i⊤ β) and N(0, Λ) respectively, where g is the sigmoid function and Λ is a diagonal covariance matrix for simplicity. We can propose a variational Gaussian distribution q(β|μ, C) to approximate the posterior of the regression parameter. If we further assume a diagonal C, a factorized form Π_{j=1}^D q(β_j|μ_j, σ_j) is both efficient and practical for inference.
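As a concrete illustration of how Eq. (4) applies to this model, the following numpy sketch (the synthetic data and the `grad_mu` helper are hypothetical, not the paper's implementation) estimates the gradient of the expected log-likelihood w.r.t. the variational mean μ under a diagonal C:

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda t: 1.0 / (1.0 + np.exp(-t))

def grad_mu(X, y, mu, sigma, M=1000):
    """Eq. (4) estimator of the gradient of E_q[sum_i log g(y_i x_i^T beta)]
    w.r.t. the variational mean mu, with beta = mu + sigma * eps (diagonal C).
    A hypothetical helper for illustration only."""
    g = np.zeros_like(mu)
    for _ in range(M):
        beta = mu + sigma * rng.standard_normal(mu.shape)
        # d/dbeta log g(y x^T beta) = (1 - g(y x^T beta)) * y * x
        g += ((1.0 - sigmoid(y * (X @ beta))) * y) @ X
    return g / M

# Synthetic stand-in data (not one of the benchmark datasets).
X = rng.standard_normal((50, 3))
y = np.sign(X @ np.array([1.0, -1.0, 0.5]) + 0.1 * rng.standard_normal(50))
print(grad_mu(X, y, mu=np.zeros(3), sigma=0.1 * np.ones(3)))
```

The same estimator, combined with the gradient of the KL term against the prior N(0, Λ), is what the SGVI algorithms of Sec. 2 would iterate on.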
Unlike iteratively optimizing Λ and μ, C as in variational EM, [28] noticed that the gradient of the lower bound allows the update of Λ to be worked out analytically in terms of the variational parameters, resulting in a new representation of the lower bound that depends only on μ, C (for details see [28]). We apply our algorithm to this variational logistic regression on three appropriate datasets: DukeBreast and Leukemia, which are small but high-dimensional and suited to sparse logistic regression, and a9a, which is large. See Table 1 for dataset descriptions. Fig. 1 shows the convergence of the Gaussian variational lower bound for Bayesian logistic regression in terms of running time. It is worth mentioning that the lower bound of HFSGVI converges within 3 iterations on the small datasets DukeBreast and Leukemia. This is because all data points are fed to all algorithms, and HFSGVI uses a better approximation of the Hessian matrix to carry out 2nd order optimization. L-BFGS-SGVI also takes less time to converge and yields a slightly larger lower bound than DSVI. In addition, as an SGD-based algorithm, DSVI is clearly less stable on small datasets and fluctuates strongly even at the later stages of optimization. For the large a9a, we observe that HFSGVI also needs 1000 iterations to reach a good lower bound and becomes less stable than the other two algorithms.
However, L-BFGS-SGVI performs the best both in terms of convergence rate and final lower bound. The misclassification report in Table 1 reflects similar advantages of our approach, indicating competitive prediction ability on various datasets. Finally, it is worth mentioning that all three algorithms learn a set of very sparse regression coefficients on the three datasets (see supplement for additional visualizations).

Table 1: Comparison of the number of misclassifications

Dataset (size: #train/#test/#features) | DSVI train/test | L-BFGS-SGVI train/test | HFSGVI train/test
DukeBreast (38/4/7129)                 | 0 / 2           | 0 / 1                  | 0 / 0
Leukemia (38/34/7129)                  | 0 / 3           | 0 / 3                  | 0 / 3
A9a (32561/16281/123)                  | 4948 / 2455     | 4936 / 2427            | 4931 / 2468

Figure 1: Convergence rate on logistic regression (zoom out or see larger figures in the supplementary); panels show the lower bound vs. time(s) on Duke Breast, Leukemia and A9a for DSVI, L-BFGS-SGVI and HFSGVI.

4.2 Variational Auto-encoder

We also apply the 2nd order stochastic variational inference to train a VAE model (setting M = 1 for the Monte Carlo integration that estimates the expectation), or equivalently deep neural networks with hybrid hidden layers. The datasets we used are images from Frey Face, Olivetti Face and MNIST. We mainly learned three tasks by maximizing the variational lower bound: parameter estimation, image reconstruction and image generation. Meanwhile, we compared the convergence rate (running time) of the three algorithms, where in this section the compared SGD is the Ada version [6] recommended for the VAE model in [17, 25]. The experimental setting is as follows. The initial weights are randomly drawn from N(0, 0.01²I) or N(0, 0.001²I), while all bias terms are initialized to 0.
The variational lower bound only introduces regularization on the encoder parameters, so we add an L2 regularizer on the decoder parameters with a shrinkage parameter of 0.001 or 0.0001. The number of hidden nodes is the same for the encoder and decoder in all auto-encoder models, which is reasonable and convenient for constructing a symmetric structure; this number is tuned from 200 to 800 in increments of 100. The mini-batch size is 100 for L-BFGS and Ada, while a larger mini-batch is recommended for HF, varying with the training-set size. The detailed results are shown in Figs. 2 and 3. Both Hessian-free and L-BFGS converge faster than Ada in terms of running time. HFSGVI also performs competitively with respect to generalization on test data, while Ada takes at least four times as long to achieve a similar lower bound. Theoretically, Newton's method has a quadratic convergence rate in terms of iterations, but with a cubic algorithmic complexity per iteration. However, we manage to lower the per-iteration computation to linear complexity. Thus, considering the number of evaluated training data points, the 2nd order algorithm needs far fewer steps than 1st order gradient descent (see visualization in the supplementary on MNIST). The Hessian matrix also replaces manually tuned learning rates, and the affine invariance property allows for automatic learning rate adjustment. Technically, if the program can run in parallel on a GPU, the speed advantage of the 2nd order algorithm should be even more obvious [21]. Fig. 2(b) and Fig. 3(b) show reconstruction results for input images. From the perspective of a deep neural network, the only difference is the Gaussian distributed latent variable z. By the corollary of Theorem 2, we can roughly say that the mean μ represents the quantity of z, meaning this layer is effectively a linear transformation with noise, which resembles dropout training [5].
Figure 2: (a) shows how the lower bound increases w.r.t. program running time for the different algorithms on Frey Face; (b) illustrates the reconstruction ability of this auto-encoder model when d_z = 20 (the left 5 columns are randomly sampled from the dataset); (c) is the learned manifold of the generative model when d_z = 2.

Figure 3: (a) shows the running time comparison on Olivetti Face; (b) illustrates the reconstruction comparison (HFSGVI vs. L-BFGS-SGVI vs. Ada-SGVI) without patch sampling, where d_z = 100: the top 5 rows are original faces.

Specifically, Olivetti consists of 64×64 pixel faces of various persons, which calls for more complicated models or preprocessing [13] (e.g. nearest neighbor interpolation, patch sampling). However, even when simply learning a very bottlenecked auto-encoder, our approach achieves acceptable results. Note that although we tuned the hyperparameters of Ada by cross-validation, its best result is still a bunch of mean faces. For manifold learning, Fig. 2(c) shows how the generative model learned by HFSGVI can simulate images. To visualize the results, we choose a 2D latent variable z in p_ψ(x|z), where the parameter ψ is estimated by the algorithm. The two coordinates of z take values transformed through the inverse CDF of the Gaussian distribution from an equally spaced grid (10×10 or 20×20) on the unit square; we then simply use the generative model to simulate the images. Besides these learning tasks, denoising, imputation [25] and even generalization to semi-supervised learning [16] are possible applications of our approach.
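The inverse-CDF grid construction for the manifold plot can be sketched directly (the `decode` map mentioned in the comment is a placeholder for the learned generative model, not defined here):

```python
import numpy as np
from statistics import NormalDist

# Latent grid as in Fig. 2(c): equally spaced points on the unit square pushed
# through the inverse Gaussian CDF, so they cover z-space according to N(0, I).
nd = NormalDist()
n = 10
u = np.linspace(0.05, 0.95, n)  # stay strictly inside (0, 1)
zs = np.array([[nd.inv_cdf(float(a)), nd.inv_cdf(float(b))] for a in u for b in u])
print(zs.shape)  # (100, 2)

# Each grid point would then be decoded, e.g. image = decode(z), where `decode`
# stands for the learned p_psi(x|z); tiling the outputs gives the manifold plot.
```

Because the inverse CDF stretches the grid near 0 and 1, the decoded tiles sample the latent prior roughly uniformly in probability mass.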
5 Conclusions and Discussion

In this paper we proposed a scalable 2nd order stochastic variational method for generative models with continuous latent variables. By developing Gaussian backpropagation through reparametrization, we introduced an efficient unbiased estimator for higher order gradient information. Combined with the efficient technique for computing Hessian-vector multiplication, we derived an efficient inference algorithm (HFSGVI) that allows joint optimization of all parameters. The algorithmic complexity of each parameter update is quadratic w.r.t. the dimension of the latent variables for both 1st and 2nd derivatives. Furthermore, the overall computational complexity of our 2nd order SGVI is linear w.r.t. the number of parameters in real applications, just like SGD or Ada. However, HFSGVI may not be as fast as Ada in some situations, e.g., when the pixel values of images are sparse, due to fast matrix multiplication implementations in most software. Future research will focus on difficult deep models such as RNNs [10, 27] or the Dynamic SBN [9]; thanks to the conditionally independent structure given sampled latent variables, we may construct a blocked Hessian matrix to optimize such dynamic models. Another possible area of future work is reinforcement learning (RL) [20]. Many RL problems reduce to computing gradients of expectations (e.g., in policy gradient methods), and there has been a series of explorations of natural gradients in this area. It would be interesting to consider where stochastic backpropagation fits in our framework and how 2nd order computations can help.

Acknowledgement This research was supported in part by the Research Grants Council of the Hong Kong Special Administrative Region (Grant No. 614513).

References

[1] Matthew James Beal. Variational algorithms for approximate Bayesian inference. PhD thesis, 2003.
[2] David M Blei, Andrew Y Ng, and Michael I Jordan.
Latent Dirichlet allocation. Journal of Machine Learning Research, 3, 2003.
[3] Joseph-Frédéric Bonnans, Jean Charles Gilbert, Claude Lemaréchal, and Claudia A Sagastizábal. Numerical optimization: theoretical and practical aspects. Springer Science & Business Media, 2006.
[4] Georges Bonnet. Transformations des signaux aléatoires à travers les systèmes non linéaires sans mémoire. Annals of Telecommunications, 19(9):203–220, 1964.
[5] George E Dahl, Tara N Sainath, and Geoffrey E Hinton. Improving deep neural networks for LVCSR using rectified linear units and dropout. In ICASSP, 2013.
[6] John Duchi, Elad Hazan, and Yoram Singer. Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research, 12:2121–2159, 2011.
[7] Dumitru Erhan, Yoshua Bengio, Aaron Courville, Pierre-Antoine Manzagol, Pascal Vincent, and Samy Bengio. Why does unsupervised pre-training help deep learning? Journal of Machine Learning Research, 11:625–660, 2010.
[8] Thomas S Ferguson. Location and scale parameters in exponential families of distributions. Annals of Mathematical Statistics, pages 986–1001, 1962.
[9] Zhe Gan, Chunyuan Li, Ricardo Henao, David Carlson, and Lawrence Carin. Deep temporal sigmoid belief networks for sequence modeling. In NIPS, 2015.
[10] Karol Gregor, Ivo Danihelka, Alex Graves, and Daan Wierstra. DRAW: A recurrent neural network for image generation. In ICML, 2015.
[11] James Hensman, Magnus Rattray, and Neil D Lawrence. Fast variational inference in the conjugate exponential family. In NIPS, 2012.
[12] Geoffrey E Hinton, Peter Dayan, Brendan J Frey, and Radford M Neal. The "wake-sleep" algorithm for unsupervised neural networks. Science, 268(5214):1158–1161, 1995.
[13] Geoffrey E Hinton and Ruslan R Salakhutdinov. Reducing the dimensionality of data with neural networks. Science, 313(5786):504–507, 2006.
[14] Matthew D Hoffman, David M Blei, Chong Wang, and John Paisley. Stochastic variational inference.
Journal of Machine Learning Research, 14(1):1303–1347, 2013.
[15] Mohammad E Khan. Decoupled variational Gaussian inference. In NIPS, 2014.
[16] Diederik P Kingma, Shakir Mohamed, Danilo Jimenez Rezende, and Max Welling. Semi-supervised learning with deep generative models. In NIPS, 2014.
[17] Diederik P Kingma and Max Welling. Auto-encoding variational Bayes. In ICLR, 2014.
[18] James Martens. Deep learning via Hessian-free optimization. In ICML, 2010.
[19] Andriy Mnih and Karol Gregor. Neural variational inference and learning in belief networks. In ICML, 2014.
[20] Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A Rusu, Joel Veness, Marc G Bellemare, Alex Graves, Martin Riedmiller, Andreas K Fidjeland, Georg Ostrovski, et al. Human-level control through deep reinforcement learning. Nature, 518(7540):529–533, 2015.
[21] Jiquan Ngiam, Adam Coates, Ahbik Lahiri, Bobby Prochnow, Quoc V Le, and Andrew Y Ng. On optimization methods for deep learning. In ICML, 2011.
[22] Razvan Pascanu and Yoshua Bengio. Revisiting natural gradient for deep networks. arXiv preprint arXiv:1301.3584, 2013.
[23] Barak A Pearlmutter. Fast exact multiplication by the Hessian. Neural Computation, 6(1):147–160, 1994.
[24] Robert Price. A useful theorem for nonlinear devices having Gaussian inputs. IRE Transactions on Information Theory, 4(2):69–72, 1958.
[25] Danilo Jimenez Rezende, Shakir Mohamed, and Daan Wierstra. Stochastic backpropagation and approximate inference in deep generative models. In ICML, 2014.
[26] Tim Salimans. Markov chain Monte Carlo and variational inference: Bridging the gap. In ICML, 2015.
[27] Ilya Sutskever, Oriol Vinyals, and Quoc V Le. Sequence to sequence learning with neural networks. In NIPS, 2014.
[28] Michalis Titsias and Miguel Lázaro-Gredilla. Doubly stochastic variational Bayes for non-conjugate inference. In ICML, 2014.
A Fast, Universal Algorithm to Learn Parametric Nonlinear Embeddings

Miguel Á. Carreira-Perpiñán
EECS, University of California, Merced
http://eecs.ucmerced.edu

Max Vladymyrov
UC Merced and Yahoo Labs
maxv@yahoo-inc.com

Abstract

Nonlinear embedding algorithms such as stochastic neighbor embedding do dimensionality reduction by optimizing an objective function involving similarities between pairs of input patterns. The result is a low-dimensional projection of each input pattern. A common way to define an out-of-sample mapping is to optimize the objective directly over a parametric mapping of the inputs, such as a neural net. This can be done using the chain rule and a nonlinear optimizer, but is very slow, because the objective involves a quadratic number of terms each dependent on the entire mapping's parameters. Using the method of auxiliary coordinates, we derive a training algorithm that works by alternating steps that train an auxiliary embedding with steps that train the mapping. This has two advantages: 1) The algorithm is universal in that a specific learning algorithm for any choice of embedding and mapping can be constructed by simply reusing existing algorithms for the embedding and for the mapping. A user can then try possible mappings and embeddings with less effort. 2) The algorithm is fast, and it can reuse N-body methods developed for nonlinear embeddings, yielding linear-time iterations.

1 Introduction

Given a high-dimensional dataset Y_{D×N} = (y_1, . . . , y_N) of N points in R^D, nonlinear embedding algorithms seek low-dimensional projections X_{L×N} = (x_1, . . . , x_N) with L < D by optimizing an objective function E(X) constructed using an N × N matrix of similarities W = (w_nm) between pairs of input patterns (y_n, y_m). For example, the elastic embedding (EE) [5] optimizes:

E(X) = Σ_{n,m=1}^N w_nm ‖x_n − x_m‖² + λ Σ_{n,m=1}^N exp(−‖x_n − x_m‖²),   λ > 0.
(1)

Here, the first term encourages projecting similar patterns near each other, while the second term repels all pairs of projections. Other algorithms of this type are stochastic neighbor embedding (SNE) [15], t-SNE [27], the neighbor retrieval visualizer (NeRV) [28] and the Sammon mapping [23], as well as spectral methods such as metric multidimensional scaling and Laplacian eigenmaps [2] (though our focus is on nonlinear objectives). Nonlinear embeddings can produce visualizations of high-dimensional data that display structure such as manifolds or clustering, and have been used for exploratory purposes and other applications in machine learning and beyond. Optimizing nonlinear embeddings is difficult for three reasons: there are many parameters (NL); the objective is very nonconvex, so gradient descent and other methods require many iterations; and it involves O(N²) terms, so evaluating the gradient is very slow. Major progress on these problems has been achieved in recent years. For the second problem, the spectral direction [29] is constructed by "bending" the gradient using the curvature of the quadratic part of the objective (for EE, this is the graph Laplacian L of W). This significantly reduces the number of iterations, while evaluating the direction itself is about as costly as evaluating the gradient. For the third problem, N-body methods such as tree methods [1] and fast multipole methods [11] approximate the gradient in O(N log N) and O(N) time for small dimensions L, respectively, and have allowed embeddings to scale up to millions of patterns [26, 31, 34]. Another issue that arises with nonlinear embeddings is that they do not define an "out-of-sample" mapping F: R^D → R^L that can be used to project patterns not in the training set. There are two basic approaches to define an out-of-sample mapping for a given embedding. The first one is a variational argument, originally put forward for Laplacian eigenmaps [6] and also applied to the elastic embedding [5].
The idea is to optimize the embedding objective for a dataset consisting of the N training points and one test point, but keeping the training projections fixed. Essentially, this constructs a nonparametric mapping implicitly defined by the training points Y and their projections X, without introducing any assumptions. The mapping comes out in closed form for Laplacian eigenmaps (a Nadaraya-Watson estimator) but not in general (e.g. for EE), in which case it needs a numerical optimization. In either case, evaluating the mapping for a test point is O(ND), which is slow and does not scale. (For spectral methods one can also use the Nyström formula [3], but it does not apply to nonlinear embeddings, and is still O(ND) at test time.) The second approach is to use a mapping F belonging to a parametric family F of mappings (e.g. linear or neural net), which is fast at test time. Directly fitting F to (Y, X) is inelegant, since F is unrelated to the embedding, and may not work well if the mapping cannot model the data well (e.g. if F is linear). A better way is to involve F in the learning from the beginning, by replacing x_n with F(y_n) in the embedding objective function and optimizing it over the parameters of F. For example, for the elastic embedding of (1) this means

P(F) = Σ_{n,m=1}^N w_nm ‖F(y_n) − F(y_m)‖² + λ Σ_{n,m=1}^N exp(−‖F(y_n) − F(y_m)‖²).   (2)

This will give better results because the only embeddings allowed are those realizable by a mapping F in the family F considered. Hence, the optimal F will exactly match the embedding, which is still trying to optimize the objective E(X). This provides an intermediate solution between the nonparametric mapping described above, which is slow at test time, and the direct fit of a parametric mapping to the embedding, which is suboptimal. We focus on this approach, which we call a parametric embedding (PE), following previous work [25].
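For reference, the EE objective of Eq. (1) can be evaluated directly; the following numpy sketch (with random stand-in data, not from the paper) does so, noting that the n = m terms of the repulsive sum only add the constant λN:

```python
import numpy as np

def ee_objective(X, W, lam):
    """Eq. (1): sum_nm w_nm ||x_n - x_m||^2 + lam * sum_nm exp(-||x_n - x_m||^2)
    for projections X of shape (N, L) and symmetric similarities W of shape (N, N)."""
    sq = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)  # pairwise sq. distances
    return np.sum(W * sq) + lam * np.sum(np.exp(-sq))

rng = np.random.default_rng(0)
N, L = 5, 2
X = rng.standard_normal((N, L))
W = rng.random((N, N))
W = (W + W.T) / 2
np.fill_diagonal(W, 0)
print(ee_objective(X, W, lam=1.0))
```

The PE objective of Eq. (2) is the same function evaluated at F(Y) instead of free coordinates X, which is exactly why the quadratic number of pairwise terms makes chain-rule training slow.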
A long history of PEs exists, using unsupervised [14, 16–18, 24, 25, 32] or supervised [4, 9, 10, 13, 20, 22] embedding objectives, and using linear or nonlinear mappings (e.g. neural nets). Each of these papers develops a specialized algorithm to learn the particular PE it defines (= embedding objective and mapping family). Besides, PEs have also been used as regularization terms in semi-supervised classification, regression or deep learning [33]. Our focus in this paper is on optimizing an unsupervised parametric embedding defined by a given embedding objective E(X), such as EE or t-SNE, and a given family for the mapping F, such as linear or a neural net. The straightforward approach, used in all the papers cited above, is to derive a training algorithm by applying the chain rule to compute gradients over the parameters of F and feeding them to a nonlinear optimizer (usually gradient descent or conjugate gradients). This has three problems. First, a new gradient and optimization algorithm must be developed and coded for each choice of E and F. For a user who wants to try different choices on a given dataset this is very inconvenient; the power of nonlinear embeddings and unsupervised methods in general lies precisely in their use as exploratory techniques for understanding the structure in data, so a user needs to be able to try multiple techniques. Ideally, the user should simply be able to plug different mappings F into any embedding objective E, with minimal development work. Second, computing the gradient involves O(N²) terms each depending on the entire mapping's parameters, which is very slow. Third, both E and F must be differentiable for the chain rule to apply. Here, we propose a new approach to optimizing parametric embeddings, based on the recently introduced method of auxiliary coordinates (MAC) [7, 8], that partially alleviates these problems. The idea is to solve an equivalent, constrained problem by introducing new variables (the auxiliary coordinates).
Alternating optimization over the coordinates and the mapping's parameters results in a step that trains an auxiliary embedding with a "regularization" term, and a step that trains the mapping by solving a regression, both of which can be solved by existing algorithms. Section 2 introduces important concepts and describes the chain-rule based optimization of parametric embeddings, section 3 applies MAC to parametric embeddings, and section 4 shows with different combinations of embeddings and mappings that the resulting algorithm is very easy to construct, including use of N-body methods, and is faster than the chain-rule based optimization.

Figure 1: Left: illustration of the feasible set {Z ∈ R^{L×N}: Z = F(Y) for F ∈ F} (grayed areas) of embeddings that can be produced by the mapping family F. This corresponds to the feasible set of the equality constraints in the MAC-constrained problem (4). A parametric embedding Z∗ = F∗(Y) is a feasible embedding with locally minimal value of E. A free embedding X∗ is a minimizer of E and is usually not feasible. A direct fit F′ (to the free embedding X∗) is feasible but usually not optimal. Right 3 panels: 2D embeddings of 3 objects from the COIL-20 dataset using a linear mapping: a free embedding, its direct fit, and the parametric embedding (PE) optimized with MAC.

2 Free embeddings, parametric embeddings and chain-rule gradients

Consider a given nonlinear embedding objective function E(X) that takes an argument X ∈ R^{L×N} and maps it to a real value. E(X) is constructed for a dataset Y ∈ R^{D×N} according to a particular embedding model. We will use as running example the equations (1), (2) for the elastic embedding, which are simpler than for most other embeddings. We call free embedding X∗ the result of optimizing E, i.e., a (local) optimizer of E.
A parametric embedding (PE) objective function for E using a family F of mappings F: R^D → R^L (for example, linear mappings) is defined as P(F) = E(F(Y)), where F(Y) = (F(y_1), . . . , F(y_N)), as in eq. (2) for EE. Note that, to simplify the notation, we do not write explicitly the parameters of F. Thus, a specific PE can be defined by any combination of embedding objective function E (EE, SNE...) and parametric mapping family F (linear, neural net...). The result of optimizing P, i.e., a (local) optimizer of P, is a mapping F∗ which we can apply to any input y ∈ R^D, not necessarily from among the training patterns. Finally, we call direct fit the mapping resulting from fitting F to (Y, X∗) by least-squares regression, i.e., to map the input patterns to a free embedding. We have the following results.

Theorem 2.1. Let X∗ be a global minimizer of E. Then ∀F ∈ F: P(F) ≥ E(X∗).
Proof. P(F) = E(F(Y)) ≥ E(X∗).

Theorem 2.2 (Perfect direct fit). Let F∗ ∈ F. If F∗(Y) = X∗ and X∗ is a global minimizer of E, then F∗ is a global minimizer of P.
Proof. Let F ∈ F with F ≠ F∗. Then P(F) = E(F(Y)) ≥ E(X∗) = E(F∗(Y)) = P(F∗).

Theorem 2.2 means that if the direct fit of F∗ to (Y, X∗) has zero error, i.e., F∗(Y) = X∗, then it is the solution of the parametric embedding, and there is no need to optimize P. Theorem 2.1 means that a PE cannot do better than a free embedding¹. This is obvious in that a PE is not free but constrained to use only embeddings that can be produced by a mapping in F, as illustrated in fig. 1. A PE will typically worsen the free embedding: more powerful mapping families, such as neural nets, will distort the embedding less than more restricted families, such as linear mappings. In this sense, the free embedding can be seen as using as mapping family F a table (Y, X) with parameters X. It represents the most flexible mapping, since every projection x_n is a free parameter, but it can only be applied to patterns in the training set Y.
We will assume that the direct fit has a positive error, i.e., the direct fit is not perfect, so that optimizing P is necessary. Computationally, the complexity of the gradient of P(F) appears to be O(N² |F|), where |F| is the number of parameters in F, because P(F) involves O(N²) terms, each dependent on all the parameters of F (e.g. for linear F this would cost O(N²LD)). However, if manually simplified and coded, the gradient can actually be computed in O(N²L + N |F|). For example, for the elastic embedding with a linear mapping F(y) = Ay, where A is L × D, the gradient of eq. (2) is:

∂P/∂A = 2 Σ_{n,m=1}^N [w_{nm} − λ exp(−‖Ay_n − Ay_m‖²)] (Ay_n − Ay_m)(y_n − y_m)^T   (3)

and this can be computed in O(N²L + NDL) if we precompute X = AY and take common factors of the summation over x_n and x_m. An automatic differentiation package may or may not be able to realize these savings in general.

¹By a continuity argument, theorem 2.2 carries over to the case where F∗ and X∗ = F∗(Y) are local minimizers of P and E, respectively. However, theorem 2.2 would apply only locally, that is, P(F) ≥ E(X∗) holds locally but there may be mappings F with P(F) < E(X∗) associated with another (lower) local minimizer of E. However, the same intuition remains: we cannot expect a PE to improve over a good free embedding.

The obvious way to optimize P(F) is to compute the gradient wrt the parameters of F by applying the chain rule (since P is a function of E, and E is a function of the parameters of F), assuming E and F are differentiable. While perfectly doable in theory, in practice this has several problems. (1) Deriving, debugging and coding the gradient of P for a nonlinear F is cumbersome. One could use automatic differentiation [12], but current packages can result in inefficient, non-simplified gradients in time and memory, and are not in widespread use in machine learning.
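The manual simplification the paper alludes to can be made concrete for the linear case: precompute X = AY and rearrange the double sum of eq. (3) as a graph Laplacian. This sketch (our own rearrangement, assuming W symmetric) is not the authors' implementation:

```python
import numpy as np

def ee_linear_pe_gradient(A, Y, W, lam):
    """Gradient (3) of the linear-mapping EE objective in O(N^2 L + N D L),
    via X = A Y and a graph-Laplacian rearrangement (W symmetric).
    A sketch, not the paper's code."""
    X = A @ Y
    sq = np.sum(X**2, axis=0)
    D2 = np.maximum(sq[:, None] + sq[None, :] - 2.0 * X.T @ X, 0.0)
    C = W - lam * np.exp(-D2)          # c_nm = w_nm - lam exp(-||x_n - x_m||^2)
    np.fill_diagonal(C, 0.0)           # n = m terms vanish in eq. (3)
    Lc = np.diag(C.sum(axis=1)) - C    # graph Laplacian of C
    return 4.0 * X @ Lc @ Y.T          # = 2 sum_nm c_nm (x_n - x_m)(y_n - y_m)^T
```

The rearrangement uses the identity Σ_{n,m} c_{nm}(x_n − x_m)(y_n − y_m)^T = 2 X L_C Y^T for symmetric C, avoiding the explicit O(N²) loop over outer products.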
Also, combining autodiff with N-body methods seems difficult, because the latter require spatial data structures that are effective for points in low dimension (no more than 3 as far as we know) and depend on the actual point values. (2) The PE gradient may not benefit from special-purpose algorithms developed for embeddings. For example, the spectral direction method [29] relies on special properties of the free-embedding Hessian which do not apply to the PE Hessian. (3) Given the gradient, one then has to choose, and possibly adapt, a suitable nonlinear optimization method and set its parameters (line search parameters, etc.) so that convergence is assured and the resulting algorithm is efficient. Simple choices such as gradient descent or conjugate gradients are usually not efficient, and developing a good algorithm is a research problem in itself (as evidenced by the many papers that study specific combinations of embedding objective and parametric mapping). (4) Even having done all this, the resulting algorithm will still be very slow because of the complexity of computing the gradient: O(N²L + N |F|). It may be possible to approximate the gradient using N-body methods, but again this would involve significant development effort. (5) As noted earlier, the chain rule only applies if both E and F are differentiable. Finally, all of the above needs to be redone if we change the mapping (e.g. from a neural net to an RBF network) or the embedding (e.g. from EE to t-SNE). We now show how these problems can be addressed by using a different approach to the optimization.

3 Optimizing a parametric embedding using auxiliary coordinates

The PE objective function, e.g. (2), can be seen as a nested function where we first apply F and then E. A recently proposed strategy, the method of auxiliary coordinates (MAC) [7, 8], can be used to derive optimization algorithms for such nested systems.
We write the nested problem min P(F) = E(F(Y)) as the following, equivalent constrained optimization problem:

min P̄(F, Z) = E(Z)   s.t.   z_n = F(y_n), n = 1, . . . , N   (4)

where we have introduced an auxiliary coordinate z_n for each input pattern and a corresponding equality constraint. z_n can be seen as the output of F (i.e., the low-dimensional projection) for y_n. The optimization is now on an augmented space (F, Z) with NL extra parameters Z ∈ R^{L×N}, and F ∈ F. The feasible set of the equality constraints is shown in fig. 1. We solve the constrained problem (4) using a quadratic-penalty method (it is also possible to use the augmented Lagrangian method), by optimizing the following unconstrained problem and driving µ → ∞:

min P_Q(F, Z; µ) = E(Z) + (µ/2) Σ_{n=1}^N ‖z_n − F(y_n)‖² = E(Z) + (µ/2) ‖Z − F(Y)‖².   (5)

Under mild assumptions, the minima (Z∗(µ), F∗(µ)) trace a continuous path that converges to a local optimum of P̄(F, Z) and hence of P(F) [7, 8]. Finally, we optimize P_Q using alternating optimization over the coordinates and the mapping. This results in two steps:

Over F given Z: min_{F∈F} Σ_{n=1}^N ‖z_n − F(y_n)‖². This is a standard least-squares regression for a dataset (Y, Z) using F, and can be solved using existing, well-developed code for many families of mappings. For example, for a linear mapping F(y) = Ay we solve a linear system A = ZY⁺ (efficiently done by caching Y⁺ in the first iteration and doing a matrix multiplication in subsequent iterations); for a deep net, we can use stochastic gradient descent with pretraining, possibly on a GPU; for a regression tree or forest, we can use any tree-growing algorithm; etc. Also, note that if we want to have a regularization term R(F) in the PE objective (e.g. for weight decay, or for model complexity), that term will appear in the F step but not in the Z step. Hence, the training and regularization of the mapping F are confined to the F step, given the inputs Y and current outputs Z.
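For the linear case, the F step just described amounts to one cached pseudoinverse and a matrix product per MAC iteration. A minimal sketch (the function name is ours):

```python
import numpy as np

def f_step_linear(Z, Y, Y_pinv=None):
    """F step for a linear mapping: least-squares fit A = Z Y^+.
    Caching Y^+ once reduces every later F step to a single matrix
    product.  A sketch under the paper's notation, not its code."""
    if Y_pinv is None:
        Y_pinv = np.linalg.pinv(Y)   # compute once, reuse across MAC iterations
    return Z @ Y_pinv, Y_pinv
```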
The mapping F "communicates" with the embedding objective precisely through these low-dimensional coordinates Z.

Over Z given F: min_Z E(Z) + (µ/2) ‖Z − F(Y)‖². This is a regularized embedding, since E(Z) is the original embedding objective function and ‖Z − F(Y)‖² is a quadratic regularization term on Z, with weight µ/2, which encourages Z to be close to a given embedding F(Y). We can reuse existing, well-developed code to learn the embedding E(Z) with simple modifications. For example, the gradient has an added term µ(Z − F(Y)); the spectral direction now uses a curvature matrix L + (µ/2)I. The embedding "communicates" with the mapping F through the outputs F(Y) (which are constant within the Z step), which gradually force the embedding Z to agree with the output of a member of the family of mappings F. Hence, the intricacies of nonlinear optimization (line search, method parameters, etc.) remain confined within the regression for F and within the embedding for Z, separately from each other. Designing an optimization algorithm for an arbitrary combination of embedding and mapping is simply achieved by alternately calling existing algorithms for the embedding and for the mapping. Although we have introduced a large number of new parameters to optimize over, the NL auxiliary coordinates Z, the cost of a MAC iteration is actually the same (asymptotically) as the cost of computing the PE gradient, i.e., O(N²L + N |F|), where |F| is the number of parameters in F. In the Z step, the objective function has O(N²) terms but each term depends only on 2 projections (z_n and z_m, i.e., 2L parameters), hence it costs O(N²L). In the F step, the objective function has N terms, each depending on the entire mapping's parameters, hence it costs O(N |F|).
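The "simple modification" to the free-embedding gradient in the Z step is just the added term µ(Z − F(Y)). The following sketch writes it out for the elastic embedding (our own Laplacian form, assuming W symmetric), standing in for the paper's spectral-direction code:

```python
import numpy as np

def z_step_gradient(Z, FY, W, lam, mu):
    """Gradient of the regularized embedding E(Z) + (mu/2)||Z - F(Y)||^2
    for the elastic embedding (W symmetric): the free-embedding gradient
    plus the penalty term mu (Z - F(Y)).  A hedged sketch only."""
    sq = np.sum(Z**2, axis=0)
    D2 = np.maximum(sq[:, None] + sq[None, :] - 2.0 * Z.T @ Z, 0.0)
    C = W - lam * np.exp(-D2)
    np.fill_diagonal(C, 0.0)
    Lc = np.diag(C.sum(axis=1)) - C
    return 4.0 * Z @ Lc + mu * (Z - FY)   # free EE gradient + added penalty term
```

Note that F(Y) is constant within the Z step, so the extra term costs only O(NL), leaving the O(N²L) embedding gradient as the dominant cost.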
Another advantage of MAC is that, because it does not use chain-rule gradients, it is even possible to use something like a regression tree for F, which is not differentiable, so that the PE objective function is not differentiable either. In MAC, we can use an algorithm to train regression trees within the F step using as data (Y, Z), reducing the constraint error ‖Z − F(Y)‖² and the PE objective. A final advantage is that we can benefit from recent work on using N-body methods to reduce the O(N²) complexity of computing the embedding gradient exactly to O(N log N) (using tree-based methods such as the Barnes-Hut algorithm [26, 34]) or even O(N) (using fast multipole methods [31]), at a small approximation error. We can reuse such code as is, without any extra work, to approximate the gradient of E(Z) and then add to it the exact gradient of the regularization term ‖Z − F(Y)‖², which is already linear in N. Hence, each MAC iteration (Z and F steps) runs in time linear in the sample size, and is thus scalable to larger datasets. The problem of optimizing parametric embeddings is closely related to that of learning binary hashing for fast information retrieval using affinity-based loss functions [21]. The only difference is that in binary hashing the mapping F (an L-bit hash function) maps a D-dimensional vector y ∈ R^D to an L-dimensional binary vector z ∈ {0, 1}^L. The MAC framework can also be applied, and the resulting algorithm alternates an F step that fits a classifier for each bit of the hash function, and a Z step that optimizes a regularized binary embedding using combinatorial optimization.

Schedule of µ, initial Z and the path to a minimizer The MAC algorithm for parametric embeddings introduces no new optimization parameters except for the penalty parameter µ.
The convergence theory of quadratic-penalty methods and MAC [7, 8, 19] tells us that convergence to a local optimum is guaranteed if each iteration achieves sufficient decrease (always possible by running enough (Z, F) steps) and if µ → ∞. The latter condition ensures the equality constraints are eventually satisfied. Mathematically, the minima (Z∗(µ), F∗(µ)) of P_Q as a function of µ ∈ [0, ∞) trace a continuous path in the (Z, F) space that ends at a local minimum of the constrained problem (4), and thus of the parametric embedding objective function. Hence, our algorithm belongs to the family of path-following methods, such as quadratic penalty, augmented Lagrangian, homotopy and interior-point methods, widely regarded as effective with nonconvex problems. In practice, one follows that path loosely, i.e., doing fast, inexact steps on Z and F for the current value of µ and then increasing µ. How fast to increase µ depends on the particular problem; typically, one multiplies µ by a factor of around 2. Increasing µ very slowly will follow the path more closely, but the runtime will increase. Since µ does not appear in the F step, increasing µ is best done within a Z step (i.e., we run several iterations over Z, increase µ, run several iterations over Z, and then do an F step). The starting point of the path is µ → 0⁺. Here, the Z step simply optimizes E(Z) and hence gives us a free embedding (e.g. we just train an elastic embedding model on the dataset). The F step then fits F to (Y, Z) and hence gives us the direct fit (which generally will have a positive error ‖Z − F(Y)‖²; otherwise we stop with an optimal PE). Thus, the beginning of the path is the direct fit to the free embedding. As µ increases, we follow the path (Z∗(µ), F∗(µ)), and as µ → ∞, F converges to a minimizer of the PE and Z converges to F(Y).
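Putting the schedule together with the two alternating steps, the whole MAC loop for a linear EE parametric embedding might look as follows. This is an illustrative sketch: plain gradient descent stands in for the spectral direction, and the step sizes and µ schedule are our own choices, not the paper's:

```python
import numpy as np

def ee_grad(Z, W, lam):
    """Gradient of the free EE objective wrt Z (W symmetric)."""
    sq = np.sum(Z**2, axis=0)
    D2 = np.maximum(sq[:, None] + sq[None, :] - 2.0 * Z.T @ Z, 0.0)
    C = W - lam * np.exp(-D2)
    np.fill_diagonal(C, 0.0)
    return 4.0 * Z @ (np.diag(C.sum(axis=1)) - C)

def mac_linear_ee(Y, W, lam, L=2, mus=(0.01, 0.02, 0.04, 0.08),
                  z_iters=200, lr=1e-3, seed=0):
    """Quadratic-penalty MAC for a linear EE parametric embedding.
    The mu -> 0+ phase yields a free embedding and the first F step its
    direct fit; mu then grows on a geometric schedule.  Step sizes and
    schedule are hypothetical; a sketch, not the paper's algorithm."""
    rng = np.random.default_rng(seed)
    Z = 1e-2 * rng.standard_normal((L, Y.shape[1]))
    for _ in range(z_iters):                  # free embedding (mu -> 0+)
        Z -= lr * ee_grad(Z, W, lam)
    Y_pinv = np.linalg.pinv(Y)                # cached for every F step
    A = Z @ Y_pinv                            # direct fit
    for mu in mus:                            # follow the path as mu grows
        for _ in range(z_iters):              # Z step: regularized embedding
            Z -= lr * (ee_grad(Z, W, lam) + mu * (Z - A @ Y))
        A = Z @ Y_pinv                        # F step: least-squares refit
    return A, Z
```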
Hence, the "lifetime" of the MAC algorithm over the "time" µ starts with a free embedding and a direct fit which disagree with each other, and progressively reduces the error in the F fit by increasing the error in the Z embedding, until F(Y) and Z agree at an optimal PE. Although it is possible to initialize Z in a different way (e.g. randomly) and start with a large value of µ, we find that this converges to worse local optima than starting from a free embedding with a small µ. Good local optima for the free embedding itself can be found by homotopy methods as well [5].

4 Experiments

Our experiments confirm that MAC finds optima as good as those of the conventional optimization based on chain-rule gradients, but that it is faster (particularly if using N-body methods). We demonstrate this with different embedding objectives (the elastic embedding and t-SNE) and mappings (linear and neural net). We report on a representative subset of experiments.

Illustrative example The simple example of fig. 1 shows the different embedding types described in the paper. We use the COIL-20 dataset, containing rotation sequences of 20 physical objects imaged every 5 degrees, each a grayscale image of 128 × 128 pixels, for a total of N = 1440 points in 16384 dimensions; thus, each object traces a closed loop in pixel space. We produce 2D embeddings of 3 objects, using the elastic embedding (EE) [5]. The free embedding X∗ results from optimizing the EE objective function (1), without any limitations on the low-dimensional projections. It gives the best visualization of the data, but no out-of-sample mapping. We now seek a linear out-of-sample mapping F. The direct fit fits a linear mapping to map the high-dimensional images Y to their 2D projections X∗ from the free embedding. The resulting predictions F(Y) give a quite distorted representation of the data, because a linear mapping cannot realize the free embedding X∗ with low error.
The parametric embedding (PE) finds the linear mapping F∗ that optimizes P(F), which for EE is eq. (2). To optimize the PE, we used MAC (which was faster than gradient descent and conjugate gradients). The resulting PE represents the data worse than the free embedding (since the PE is constrained to produce embeddings that are realizable by a linear mapping), but better than the direct fit, because the PE can search for embeddings that, while being realizable by a linear mapping, produce a lower value of the EE objective function. The details of the optimization are as follows. We preprocess the data using PCA, projecting to 15 dimensions (otherwise learning a mapping would be trivial, since there are more degrees of freedom than there are points). The free embedding was optimized using the spectral direction [29] until consecutive iterates differed by a relative error less than 10⁻³. We increased µ from 0.003 to 0.015 with a step of 0.001, and did 40 iterations for each µ value. The Z step uses the spectral direction, stopping when the relative error is less than 10⁻².

Cost of the iterations Fig. 2(left) shows, as a function of the number of data points N (using a 3D Swissroll dataset), the time needed to compute the gradient of the PE objective (red curve) and the gradient of the MAC Z and F steps (black and magenta, respectively, as well as their sum in blue). We use t-SNE and a sigmoidal neural net with an architecture 3–100–500–2. We approximate the Z gradient in O(N log N) using the Barnes-Hut method [26, 34]. The log-log plot shows the asymptotic complexity to be quadratic for the PE gradient, but linear for the F step and O(N log N) for the Z step. The PE gradient runs out of memory for large N.

Quality of the local optima For the same Swissroll dataset, fig.
2(right) shows, as a function of the number of data points N, the final value of the PE objective function achieved by the chain-rule CG optimization and by MAC, both using the same initialization. There is practically no difference between the two optimization algorithms. We do sometimes find that they converge to different local optima, as in some of our other experiments.

Figure 2: Runtime per iteration and final PE objective for a 3D Swissroll dataset, using as mapping F a sigmoidal neural net with an architecture 3–100–500–2, for t-SNE. For PE, we give the runtime needed to compute the gradient of the PE objective using CG with chain-rule gradients. For MAC, we give the runtime needed to compute the (Z, F) steps, separately and together. The gradient of the Z step is approximated with an N-body method. Errorbars over 5 randomly generated Swissrolls.

Different embedding objectives and mapping families The goal of this experiment is to show that we can easily derive a convergent, efficient algorithm for various combinations of embeddings and mappings. We consider as embedding objective functions E(X) t-SNE and EE, and as mappings F a neural net and a linear mapping. We apply each combination to learn a parametric embedding for the MNIST dataset, containing N = 60,000 handwritten digit images. Training a nonlinear (free) embedding on a dataset of this size was very slow until the recent introduction of N-body methods for t-SNE, EE and other methods [26, 31, 34]. We are the first to use N-body methods for PEs, thanks to the decoupling between mapping and embedding introduced by MAC. For each combination, we derive the MAC algorithm by reusing code available online: for the EE and t-SNE (free) embeddings we use the spectral direction [29]; for the N-body methods to approximate the gradient of the embedding objective function we use the fast multipole method for EE [31] and the Barnes-Hut method for t-SNE [26, 34]; and for training a deep net we use unsupervised pretraining and backpropagation [22, 25].

Figure 3: MNIST dataset. Top: t-SNE with a neural net. Bottom: EE with a linear mapping. Left: initial, free embedding (we show a sample of 5000 points to avoid clutter). Middle: final parametric embedding. Right: learning curves for MAC and chain-rule optimization. Each marker indicates one iteration. For MAC, the solid markers indicate iterations where µ increased.

Fig. 3(left) shows the free embedding of MNIST obtained with t-SNE and EE after 100 iterations of the spectral direction. To compute the Gaussian affinities between pairs of points, we used entropic affinities with a perplexity of K = 30 neighbors [15, 30]. The optimization details are as follows. For the neural net, we replicated the setup of [25]. This uses a neural net with an architecture (28 × 28)–500–500–2000–2, initialized with pretraining as described in [22] and [25]. For the chain-rule PE optimization we used the code from [25]. Because of memory limitations, [25] actually solved an approximate version of the PE objective function, where rather than using all N² pairwise point interactions, only BN interactions are used, corresponding to using minibatches of B = 5000 points. Therefore, the solution obtained is not a minimizer of the PE objective, as can be seen from the higher objective value in fig. 3(bottom).
However, we also solved the exact objective by using B = N (i.e., one minibatch containing the entire dataset). Each minibatch was trained with 3 CG iterations and a total of 30 epochs. For MAC, we used µ ∈ {10⁻⁷, 5·10⁻⁷, 10⁻⁶, 5·10⁻⁶, 10⁻⁵, 5·10⁻⁵}, optimizing until the objective function decrease (before the Z step and after the F step) was less than a relative error of 10⁻³. The rest of the optimization details concern the embedding and neural net, and are based on existing code. The initialization for Z is the free embedding. The Z step (like the free embedding) uses the spectral direction with a fixed step size γ = 0.05, using 10 iterations of linear conjugate gradients to solve the linear system (L + (µ/2)I)P = −G, with warm starts (i.e., initialized from the previous iteration's direction). The gradient G of the free embedding is approximated in O(N log N) using the Barnes-Hut method with accuracy θ = 1.5. Altogether, one Z iteration took around 5 seconds. We exit the Z step when the relative error between consecutive embeddings is less than 10⁻³. For the F step we used stochastic gradient descent with minibatches of 100 points, step size 10⁻³ and momentum rate 0.9, and trained for 5 epochs for the first 3 values of µ and for 3 epochs for the rest. For the linear mapping F(y) = Ay, we implemented our own chain-rule PE optimizer with gradient descent and backtracking line search for 30 iterations. In MAC, we used 10 µ values spaced logarithmically from 10⁻² to 10², optimizing at each µ value until the objective function decrease was less than a relative error of 10⁻⁴. Both the Z step and the free embedding use the spectral direction with a fixed step size γ = 0.01. We stop optimizing them when the relative error between consecutive embeddings is less than 10⁻⁴. The gradient is approximated using fast multipole methods with accuracy p = 6 (the number of terms in the truncated series).
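The warm-started, truncated linear CG mentioned above for the spectral-direction system (L + (µ/2)I)P = −G can be sketched generically for any symmetric positive definite system; this is our own stand-in, not the paper's code:

```python
import numpy as np

def cg_solve(M, b, x0, iters=10):
    """Plain conjugate gradients with a warm start x0, standing in for
    the 10-iteration warm-started linear CG used in the Z step.  M must
    be symmetric positive definite, e.g. L + (mu/2) I.  A sketch only."""
    x = x0.astype(float).copy()
    r = b - M @ x                   # residual at the warm start
    p = r.copy()
    rs = float(r @ r)
    for _ in range(iters):
        Mp = M @ p
        alpha = rs / float(p @ Mp)
        x += alpha * p
        r -= alpha * Mp
        rs_new = float(r @ r)
        if np.sqrt(rs_new) < 1e-12:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x
```

Warm-starting from the previous iteration's direction pays off because consecutive Z-step systems change only slightly, so a few CG iterations suffice for an adequate search direction.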
In the F step, the linear system for A was solved using 10 iterations of linear conjugate gradients with warm starts. Fig. 3 shows the final parametric embeddings for MAC, neural-net t-SNE (top) and linear EE (bottom), and the learning curves (PE error P(F(Y)) over iterations). MAC is considerably faster than the chain-rule optimization in all cases. For the neural-net t-SNE, MAC is almost 5× faster than using minibatches (the approximate PE objective) and 20× faster than the exact, batch mode. This is partly thanks to the use of N-body methods in the Z step. The runtimes were (excluding the 40 min taken by pretraining): MAC: 42 min; PE (minibatch): 3.36 h; PE (batch): 15 h; free embedding: 63 s. Without using N-body methods, MAC is 4× faster than PE (batch) and comparable to PE (minibatch). For the linear EE, the runtimes were: MAC: 12.7 min; PE: 63 min; direct fit: 40 s. The neural-net t-SNE embedding preserves the overall structure of the free t-SNE embedding, but the two embeddings do differ. For example, the free embedding creates small clumps of points which the neural net, being a continuous mapping, tends to smooth out. The linear EE embedding distorts the free EE embedding considerably more than a neural net does. This is because a linear mapping has a much harder time approximating the complex mapping from the high-dimensional data into 2D that the free embedding implicitly demands.

5 Conclusion

In our view, the main advantage of using the method of auxiliary coordinates (MAC) to learn parametric embeddings is that it simplifies algorithm development. One only needs to plug in existing code for the embedding (with minor modifications) and the mapping. This is particularly useful to benefit from complex, highly optimized code for specific problems, such as the N-body methods we used here, or perhaps GPU implementations of deep nets and other machine learning models.
In many applications, the efficiency of programming an easy, robust solution is more valuable than machine speed. But, in addition, we find that the MAC algorithm can be considerably faster than the chain-rule based optimization of the parametric embedding.

Acknowledgments Work funded by NSF award IIS–1423515. We thank Weiran Wang for help with training the deep net in the MNIST experiment.

References
[1] J. Barnes and P. Hut. A hierarchical O(N log N) force-calculation algorithm. Nature, 324, 1986.
[2] M. Belkin and P. Niyogi. Laplacian eigenmaps for dimensionality reduction and data representation. Neural Computation, 15:1373–1396, 2003.
[3] Y. Bengio, O. Delalleau, N. Le Roux, J.-F. Paiement, P. Vincent, and M. Ouimet. Learning eigenfunctions links spectral embedding and kernel PCA. Neural Computation, 16:2197–2219, 2004.
[4] J. Bromley, J. W. Bentz, L. Bottou, I. Guyon, Y. LeCun, C. Moore, E. Säckinger, and R. Shah. Signature verification using a "siamese" time delay neural network. Int. J. Pattern Recognition and Artificial Intelligence, 5:669–688, 1993.
[5] M. Carreira-Perpiñán. The elastic embedding algorithm for dimensionality reduction. ICML, 2010.
[6] M. Carreira-Perpiñán and Z. Lu. The Laplacian Eigenmaps Latent Variable Model. AISTATS, 2007.
[7] M. Carreira-Perpiñán and W. Wang. Distributed optimization of deeply nested systems. arXiv:1212.5921 [cs.LG], Dec. 24 2012.
[8] M. Carreira-Perpiñán and W. Wang. Distributed optimization of deeply nested systems. AISTATS, 2014.
[9] A. Globerson and S. Roweis. Metric learning by collapsing classes. NIPS, 2006.
[10] J. Goldberger, S. Roweis, G. Hinton, and R. Salakhutdinov. Neighbourhood components analysis. NIPS, 2005.
[11] L. Greengard and V. Rokhlin. A fast algorithm for particle simulations. J. Comp. Phys., 73, 1987.
[12] A. Griewank and A. Walther. Evaluating Derivatives: Principles and Techniques of Algorithmic Differentiation. SIAM Publ., second edition, 2008.
[13] R. Hadsell, S.
Chopra, and Y. LeCun. Dimensionality reduction by learning an invariant mapping. CVPR, 2006.
[14] X. He and P. Niyogi. Locality preserving projections. NIPS, 2004.
[15] G. Hinton and S. T. Roweis. Stochastic neighbor embedding. NIPS, 2003.
[16] D. Lowe and M. E. Tipping. Feed-forward neural networks and topographic mappings for exploratory data analysis. Neural Computing & Applications, 4:83–95, 1996.
[17] J. Mao and A. K. Jain. Artificial neural networks for feature extraction and multivariate data projection. IEEE Trans. Neural Networks, 6:296–317, 1995.
[18] R. Min, Z. Yuan, L. van der Maaten, A. Bonner, and Z. Zhang. Deep supervised t-distributed embedding. ICML, 2010.
[19] J. Nocedal and S. J. Wright. Numerical Optimization. Springer, second edition, 2006.
[20] J. Peltonen and S. Kaski. Discriminative components of data. IEEE Trans. Neural Networks, 16, 2005.
[21] R. Raziperchikolaei and M. Carreira-Perpiñán. Learning hashing with affinity-based loss functions using auxiliary coordinates. arXiv:1501.05352 [cs.LG], Jan. 21 2015.
[22] R. Salakhutdinov and G. Hinton. Learning a nonlinear embedding by preserving class neighbourhood structure. AISTATS, 2007.
[23] J. W. Sammon, Jr. A nonlinear mapping for data structure analysis. IEEE Trans. Computers, 18, 1969.
[24] Y. W. Teh and S. Roweis. Automatic alignment of local representations. NIPS, 2003.
[25] L. J. P. van der Maaten. Learning a parametric embedding by preserving local structure. AISTATS, 2009.
[26] L. J. P. van der Maaten. Barnes-Hut-SNE. Int. Conf. Learning Representations (ICLR), 2013.
[27] L. J. P. van der Maaten and G. E. Hinton. Visualizing data using t-SNE. JMLR, 9:2579–2605, 2008.
[28] J. Venna, J. Peltonen, K. Nybo, H. Aidos, and S. Kaski. Information retrieval perspective to nonlinear dimensionality reduction for data visualization. JMLR, 11:451–490, 2010.
[29] M. Vladymyrov and M. Carreira-Perpiñán. Partial-Hessian strategies for fast learning of nonlinear embeddings. ICML, 2012.
[30] M.
Vladymyrov and M. Carreira-Perpiñán. Entropic affinities: Properties and efficient numerical computation. ICML, 2013.
[31] M. Vladymyrov and M. Carreira-Perpiñán. Linear-time training of nonlinear low-dimensional embeddings. AISTATS, 2014.
[32] A. R. Webb. Multidimensional scaling by iterative majorization using radial basis functions. Pattern Recognition, 28:753–759, 1995.
[33] J. Weston, F. Ratle, and R. Collobert. Deep learning via semi-supervised embedding. ICML, 2008.
[34] Z. Yang, J. Peltonen, and S. Kaski. Scalable optimization for neighbor embedding for visualization. ICML, 2013.
Learning with Incremental Iterative Regularization Lorenzo Rosasco DIBRIS, Univ. Genova, ITALY LCSL, IIT & MIT, USA lrosasco@mit.edu Silvia Villa LCSL, IIT & MIT, USA Silvia.Villa@iit.it

Abstract
Within a statistical learning setting, we propose and study an iterative regularization algorithm for least squares defined by an incremental gradient method. In particular, we show that, if all other parameters are fixed a priori, the number of passes over the data (epochs) acts as a regularization parameter, and we prove strong universal consistency, i.e. almost sure convergence of the risk, as well as sharp finite sample bounds for the iterates. Our results are a step towards understanding the effect of multiple epochs in stochastic gradient techniques in machine learning, and rely on integrating statistical and optimization results.

1 Introduction

Machine learning applications often require efficient statistical procedures to process potentially massive amounts of high-dimensional data. Motivated by such applications, the broad objective of our study is to design learning procedures with optimal statistical properties and, at the same time, computational complexities proportional to the generalization properties allowed by the data, rather than their raw amount [6]. We focus on iterative regularization as a viable approach towards this goal. The key observation behind these techniques is that iterative optimization schemes applied to scattered, noisy data exhibit a self-regularizing property, in the sense that early termination (early stopping) of the iterative process has a regularizing effect [21, 24]. Indeed, iterative regularization algorithms are classical in inverse problems [15], and have recently been considered in machine learning [36, 34, 3, 5, 9, 26], where they have been proved to achieve optimal learning bounds, matching those of variational regularization schemes such as Tikhonov [8, 31].
In this paper, we consider an iterative regularization algorithm for the square loss, based on a recursive procedure processing one training set point at each iteration. Methods of the latter form, often broadly referred to as online learning algorithms, have become standard in the processing of large data sets, because of their low iteration cost and good practical performance. Theoretical studies for this class of algorithms have been developed within different frameworks: composite and stochastic optimization [19, 20, 29], online learning, a.k.a. sequential prediction [11], and statistical learning [10]. The latter is the setting of interest in this paper, where we aim at developing an analysis that simultaneously takes into account both statistical and computational aspects. To place our contribution in context, it is useful to emphasize the role of regularization and the different ways in which it can be incorporated in online learning algorithms. The key idea of regularization is that controlling the complexity of a solution can help avoid overfitting and ensure stability and generalization [33]. Classically, regularization is achieved by penalizing the objective function with some suitable functional, or by minimizing the risk over a restricted space of possible solutions [33]. Model selection is then performed to determine the amount of regularization suitable for the data at hand. More recently, there has been interest in alternative, possibly more efficient, ways to incorporate regularization. We mention in particular [1, 35, 32], where there is no explicit regularization by penalization, and the step-size of an iterative procedure is shown to act as a regularization parameter. Here, for each fixed step-size, each data point is processed once, but multiple passes are typically needed to perform model selection (that is, to pick the best step-size).
We also mention [22], where an interesting adaptive approach is proposed, which seemingly avoids model selection under certain assumptions. In this paper, we consider a different regularization strategy, widely used in practice. Namely, we consider no explicit penalization, fix the step-size a priori, and analyze the effect of the number of passes over the data, which becomes the only free parameter used to avoid overfitting, i.e. to regularize. The associated regularization strategy, which we dub incremental iterative regularization, is hence based on early stopping. The latter is a well-known "trick", for example in training large neural networks [18], and is known to perform very well in practice [16]. Interestingly, early stopping with the square loss has been shown to be related to boosting [7], see also [2, 17, 36]. Our goal here is to provide a theoretical understanding of the generalization properties of the above heuristic for incremental/online techniques. Towards this end, we analyze the behavior of both the excess risk and the iterates themselves. For the latter we obtain sharp finite sample bounds matching those for Tikhonov regularization in the same setting. Universal consistency and finite sample bounds for the excess risk can then be easily derived, albeit possibly suboptimal. Our results are developed in a capacity-independent setting [12, 30], that is, under no conditions on the covering or entropy numbers [30]. In this sense our analysis is worst case and dimension free. To the best of our knowledge, the analysis in this paper is the first theoretical study of regularization by early stopping in incremental/online algorithms, and thus a first step towards understanding the effect of multiple passes of stochastic gradient for risk minimization. The rest of the paper is organized as follows.
In Section 2 we describe the setting and the main assumptions, and in Section 3 we state the main results, discuss them, and provide the main elements of the proof, which is deferred to the supplementary material. In Section 4 we present some experimental results on real and synthetic datasets.
Notation. We denote $\mathbb{R}_+ = [0, +\infty[$, $\mathbb{R}_{++} = \,]0, +\infty[$, and $\mathbb{N}^* = \mathbb{N}\setminus\{0\}$. Given a normed space $\mathcal{B}$ and linear operators $(A_i)_{1\le i\le m}$, $A_i : \mathcal{B}\to\mathcal{B}$ for every $i$, their composition $A_m \circ \cdots \circ A_1$ will be denoted as $\prod_{i=1}^m A_i$. By convention, if $j > m$, we set $\prod_{i=j}^m A_i = I$, where $I$ is the identity of $\mathcal{B}$. The operator norm will be denoted by $\|\cdot\|$ and the Hilbert-Schmidt norm by $\|\cdot\|_{HS}$. Also, if $j > m$, we set $\sum_{i=j}^m A_i = 0$.
2 Setting and Assumptions
We first describe the setting we consider, and then introduce and discuss the main assumptions that will hold throughout the paper. We build on ideas proposed in [13, 27] and further developed in a series of follow-up works [8, 3, 28, 9]. Unlike these papers, where a reproducing kernel Hilbert space (RKHS) setting is considered, here we consider a formulation within an abstract Hilbert space. As discussed in Appendix A, results in an RKHS can be recovered as a special case. The formulation we consider is close to the setting of functional regression [25] and reduces to standard linear regression if $\mathcal{H}$ is finite dimensional; see Appendix A. Let $\mathcal{H}$ be a separable Hilbert space with inner product and norm denoted by $\langle\cdot,\cdot\rangle_{\mathcal{H}}$ and $\|\cdot\|_{\mathcal{H}}$. Let $(X, Y)$ be a pair of random variables on a probability space $(\Omega, \mathcal{S}, \mathbb{P})$, with values in $\mathcal{H}$ and $\mathbb{R}$, respectively. Denote by $\rho$ the distribution of $(X, Y)$, by $\rho_X$ the marginal measure on $\mathcal{H}$, and by $\rho(\cdot|x)$ the conditional measure on $\mathbb{R}$ given $x \in \mathcal{H}$. Considering the square loss function, the problem under study is the minimization of the risk,
$$\inf_{w\in\mathcal{H}} \mathcal{E}(w), \qquad \mathcal{E}(w) = \int_{\mathcal{H}\times\mathbb{R}} (\langle w, x\rangle_{\mathcal{H}} - y)^2 \, d\rho(x, y), \qquad (1)$$
provided that the distribution $\rho$ is fixed but known only through a training set $z = \{(x_1, y_1), \ldots$
$, (x_n, y_n)\}$, that is, a realization of $n \in \mathbb{N}^*$ independent identical copies of $(X, Y)$. In the following, we measure the quality of an approximate solution $\hat{w}\in\mathcal{H}$ (an estimator) by considering the excess risk
$$\mathcal{E}(\hat w) - \inf_{\mathcal{H}} \mathcal{E}. \qquad (2)$$
If the set of solutions of Problem (1) is nonempty, that is, $O = \operatorname{argmin}_{\mathcal{H}} \mathcal{E} \neq \varnothing$, we also consider
$$\|\hat w - w^\dagger\|_{\mathcal{H}}, \quad \text{where } w^\dagger = \operatorname{argmin}_{w\in O} \|w\|_{\mathcal{H}}. \qquad (3)$$
More precisely, we are interested in deriving almost sure convergence results and finite sample bounds on the above error measures. This requires making some assumptions that we discuss next. Throughout, we make the following basic assumption.
Assumption 1. There exist $M \in \,]0,+\infty[$ and $\kappa \in \,]0,+\infty[$ such that $|y| \le M$ $\rho$-almost surely, and $\|x\|^2_{\mathcal{H}} \le \kappa$ $\rho_X$-almost surely.
The above assumption is fairly standard. The boundedness assumption on the output is satisfied in classification (see Appendix A) and can be easily relaxed, see e.g. [8]. The boundedness assumption on the input can also be relaxed, but the resulting analysis is more involved. We omit these developments for the sake of clarity. It is well known (see e.g. [14]) that, under Assumption 1, the risk is a convex and continuous functional on $L^2(\mathcal{H}, \rho_X)$, the space of square-integrable functions with norm $\|f\|^2_\rho = \int_{\mathcal{H}} |f(x)|^2 \, d\rho_X(x)$. The minimizer of the risk on $L^2(\mathcal{H},\rho_X)$ is the regression function $f_\rho(x) = \int y \, d\rho(y|x)$ for $\rho_X$-almost every $x\in\mathcal{H}$. By considering Problem (1) we restrict the search for a solution to linear functions. Note that, since $\mathcal{H}$ is in general infinite dimensional, the minimum in (1) might not be achieved. Indeed, bounds on the error measures in (2) and (3) depend on whether, and how well, the regression function can be linearly approximated. The following assumption quantifies such a requirement in a precise way.
Assumption 2. Consider the space $L_\rho = \{f : \mathcal{H}\to\mathbb{R} \mid \exists w\in\mathcal{H} \text{ with } f(x) = \langle w, x\rangle \ \rho_X\text{-a.s.}\}$, and let $\bar L_\rho$ be its closure in $L^2(\mathcal{H},\rho_X)$. Moreover, consider the operator
$$L : L^2(\mathcal{H},\rho_X) \to L^2(\mathcal{H},\rho_X), \qquad Lf(x) = \int_{\mathcal{H}} \langle x, x'\rangle f(x') \, d\rho_X(x'), \quad \forall f\in L^2(\mathcal{H},\rho_X).$$
(4)
Define $g_\rho = \operatorname{argmin}_{g\in \bar L_\rho} \|f_\rho - g\|_\rho$. Let $r \in [0,+\infty[$, and assume that
$$(\exists g \in L^2(\mathcal{H},\rho_X)) \quad \text{such that} \quad g_\rho = L^r g. \qquad (5)$$
The above assumption is standard in the context of RKHSs [8]. Since its statement is somewhat technical, and since we provide a formulation in a Hilbert space rather than in the usual RKHS setting, we further comment on its interpretation. We begin by noting that $L_\rho$ is the space of linear functions indexed by $\mathcal{H}$, and is a proper subspace of $L^2(\mathcal{H},\rho_X)$ if Assumption 1 holds. Moreover, under the same assumption, it is easy to see that the operator $L$ is linear, self-adjoint, positive definite and trace class, hence compact, so that its fractional power in (5) is well defined. Most importantly, the following equality, which is analogous to Mercer's theorem [30], can be shown fairly easily:
$$\bar L_\rho = L^{1/2}\big(L^2(\mathcal{H},\rho_X)\big). \qquad (6)$$
This last observation allows us to interpret Condition (5). Indeed, given (6), for $r = 1/2$, Condition (5) states that $g_\rho$ belongs to $L_\rho$, rather than to its closure. In this case, Problem (1) has at least one solution, and the set $O$ in (3) is not empty. Vice versa, if $O \neq \varnothing$ then $g_\rho \in L_\rho$, and $w^\dagger$ is well defined. If $r > 1/2$ the condition is stronger than for $r = 1/2$, since the subspaces $L^r(L^2(\mathcal{H},\rho_X))$ are nested subspaces of $L^2(\mathcal{H},\rho_X)$ for increasing $r$.¹
2.1 Iterative Incremental Regularized Learning
The learning algorithm we consider is defined by the following iteration. Let $\hat w_0 \in \mathcal{H}$ and $\gamma \in \mathbb{R}_{++}$. Consider the sequence $(\hat w_t)_{t\in\mathbb{N}}$ generated through the following procedure: given $t\in\mathbb{N}$, define
$$\hat w_{t+1} = \hat u^n_t, \qquad (7)$$
where $\hat u^n_t$ is obtained at the end of one cycle, namely as the last step of the recursion
$$\hat u^0_t = \hat w_t; \qquad \hat u^i_t = \hat u^{i-1}_t - \frac{\gamma}{n}\big(\langle \hat u^{i-1}_t, x_i\rangle_{\mathcal{H}} - y_i\big)x_i, \quad i = 1, \ldots, n. \qquad (8)$$
¹If $r < 1/2$ then the regression function does not have a best linear approximation, since $g_\rho \notin L_\rho$; in particular, for $r = 0$ we are making no assumption.
Intuitively, for $0 < r < 1/2$, the condition quantifies how far $g_\rho$ is from $L_\rho$, that is, how well it can be approximated by a linear function.
Each cycle, called an epoch, corresponds to one pass over the data. The above iteration can be seen as the incremental gradient method [4, 19] for the minimization of the empirical risk corresponding to $z$, that is, the functional
$$\hat{\mathcal{E}}(w) = \frac{1}{n}\sum_{i=1}^n (\langle w, x_i\rangle_{\mathcal{H}} - y_i)^2 \qquad (9)$$
(see also Section B.2). Indeed, there is a vast literature on how the iterations (7), (8) can be used to minimize the empirical risk [4, 19]. Unlike these studies, in this paper we are interested in how the iterations (7), (8) can be used to approximately minimize the risk $\mathcal{E}$. The key idea is that, while $\hat w_t$ is close to a minimizer of the empirical risk when $t$ is sufficiently large, a good approximate solution of Problem (1) can be found by terminating the iterations earlier (early stopping). The analysis in the next few sections grounds this intuition theoretically.
Remark 1 (Representer theorem). Let $\mathcal{H}$ be an RKHS of functions from $X$ to $Y$ defined by a kernel $K : X\times X\to\mathbb{R}$. Let $\hat w_0 = 0$; then the iterate after $t$ epochs of the algorithm in (7)-(8) can be written as $\hat w_t(\cdot) = \sum_{k=1}^n (\alpha_t)_k K_{x_k}$, for suitable coefficients $\alpha_t = ((\alpha_t)_1,\ldots,(\alpha_t)_n)\in\mathbb{R}^n$, updated as follows: $\alpha_{t+1} = c^n_t$, $c^0_t = \alpha_t$,
$$(c^i_t)_k = \begin{cases} (c^{i-1}_t)_k - \dfrac{\gamma}{n}\Big(\sum_{j=1}^n K(x_i,x_j)(c^{i-1}_t)_j - y_i\Big), & k = i,\\[4pt] (c^{i-1}_t)_k, & k \neq i.\end{cases}$$
3 Early stopping for incremental iterative regularization
In this section, we present and discuss the main results of the paper, together with a sketch of the proof. The complete proofs can be found in Appendix B. We first present convergence results and then finite sample bounds for the quantities in (2) and (3).
Theorem 1. In the setting of Section 2, let Assumption 1 hold. Let $\gamma \in \,]0, \kappa^{-1}]$. Then the following hold:
(i) If we choose a stopping rule $t^* : \mathbb{N}^*\to\mathbb{N}^*$ such that
$$\lim_{n\to+\infty} t^*(n) = +\infty \quad \text{and} \quad \lim_{n\to+\infty} \frac{t^*(n)^3 \log n}{n} = 0 \qquad (10)$$
then
$$\lim_{n\to+\infty} \mathcal{E}(\hat w_{t^*(n)}) - \inf_{w\in\mathcal{H}} \mathcal{E}(w) = 0 \quad \mathbb{P}\text{-almost surely.}$$
(11)
(ii) Suppose additionally that the set $O$ of minimizers of (1) is nonempty and let $w^\dagger$ be defined as in (3). If we choose a stopping rule $t^* : \mathbb{N}^*\to\mathbb{N}^*$ satisfying the conditions in (10), then $\|\hat w_{t^*(n)} - w^\dagger\|_{\mathcal{H}} \to 0$ $\mathbb{P}$-almost surely. (12)
The above result shows that, for an a priori fixed step-size, consistency is achieved by computing a suitable number $t^*(n)$ of iterations of algorithm (7)-(8) given $n$ points. The number of required iterations tends to infinity as the number of available training points increases. Condition (10) can be interpreted as an early stopping rule, since it requires the number of epochs not to grow too fast. In particular, this excludes the choice $t^*(n) = 1$ for all $n\in\mathbb{N}^*$, namely considering only one pass over the data. In the following remark we show that, if we let $\gamma = \gamma(n)$ depend on the length of one epoch, convergence is recovered also for one pass.
Remark 2 (Recovering stochastic gradient descent). The algorithm in (7)-(8) for $t = 1$ is a stochastic gradient descent (one pass over a sequence of i.i.d. data) with step-size $\gamma/n$. Choosing $\gamma(n) = \kappa^{-1} n^{\alpha}$, with $\alpha < 1/5$, in Algorithm (7)-(8), we can derive almost sure convergence of $\mathcal{E}(\hat w_1) - \inf_{\mathcal{H}}\mathcal{E}$ as $n\to+\infty$, relying on a proof similar to that of Theorem 1.
To derive finite sample bounds, further assumptions are needed. Indeed, we will see that the behavior of the bias of the estimator depends on the smoothness Assumption 2. We are now in a position to state our main result, giving a finite sample bound.
Theorem 2 (Finite sample bounds in $\mathcal{H}$). In the setting of Section 2, let $\gamma \in \,]0, \kappa^{-1}]$ for every $t\in\mathbb{N}$. Suppose that Assumption 2 is satisfied for some $r \in \,]1/2, +\infty[$. Then the set $O$ of minimizers of (1) is nonempty, and $w^\dagger$ in (3) is well defined. Moreover, the following hold:
(i) There exists $c \in \,]0,+\infty[$ such that, for every $t\in\mathbb{N}^*$, with probability greater than $1-\delta$,
$$\|\hat w_t - w^\dagger\|_{\mathcal{H}} \le \frac{32 \log\frac{16}{\delta}}{\sqrt{n}}\Big(M\kappa^{-1/2} + 2M^2\kappa^{-1} + 3\kappa^{r-\frac{3}{2}}\|g\|_\rho\Big)\, t + \Big(\frac{r-\frac{1}{2}}{\gamma}\Big)^{r-\frac{1}{2}}\|g\|_\rho\, t^{\frac{1}{2}-r}. \qquad (13)$$
(ii) For the stopping rule $t^* : \mathbb{N}^* \to$
$\mathbb{N}^*$, $t^*(n) = \lceil n^{\frac{1}{2r+1}}\rceil$, with probability greater than $1-\delta$,
$$\|\hat w_{t^*(n)} - w^\dagger\|_{\mathcal{H}} \le \left[32\log\frac{16}{\delta}\Big(M\kappa^{-1/2} + 2M^2\kappa^{-1} + 3\kappa^{r-\frac{3}{2}}\|g\|_\rho\Big) + \Big(\frac{r-\frac{1}{2}}{\gamma}\Big)^{r-\frac{1}{2}}\|g\|_\rho\right] n^{-\frac{r-1/2}{2r+1}}. \qquad (14)$$
The dependence on $\kappa$ suggests that a large $\kappa$, which corresponds to a small $\gamma$, helps to decrease the sample error, but increases the approximation error. Next we present the result for the excess risk. We consider only the attainable case, that is, the case $r > 1/2$ in Assumption 2. The case $r \le 1/2$ is deferred to Appendix A, since both the proof and the statement are conceptually similar to the attainable case.
Theorem 3 (Finite sample bounds for the risk – attainable case). In the setting of Section 2, let Assumption 1 hold, and let $\gamma \in \,]0,\kappa^{-1}]$. Let Assumption 2 be satisfied for some $r\in\,]1/2,+\infty]$. Then the following hold:
(i) For every $t\in\mathbb{N}^*$, with probability greater than $1-\delta$,
$$\mathcal{E}(\hat w_t) - \inf_{\mathcal{H}}\mathcal{E} \le \frac{2\big(32\log(16/\delta)\big)^2}{n}\Big[M + 2M^2\kappa^{-1/2} + 3\kappa^{r}\|g\|_\rho\Big]^2 t^2 + 2\Big(\frac{r}{\gamma t}\Big)^{2r}\|g\|^2_\rho \qquad (15)$$
(ii) For the stopping rule $t^*:\mathbb{N}^*\to\mathbb{N}^*$, $t^*(n) = \lceil n^{\frac{1}{2(1+r)}}\rceil$, with probability greater than $1-\delta$,
$$\mathcal{E}(\hat w_{t^*(n)}) - \inf_{\mathcal{H}}\mathcal{E} \le \left[8\Big(32\log\frac{16}{\delta}\Big)^2\Big(M + 2M^2\kappa^{-1/2} + 3\kappa^r\|g\|_\rho\Big)^2 + 2\Big(\frac{r}{\gamma}\Big)^{2r}\|g\|^2_\rho\right] n^{-r/(r+1)} \qquad (16)$$
Equations (13) and (15) arise from a bias-variance (sample-approximation) decomposition of the error. Choosing the number of epochs that optimizes the bounds in (13) and (15), we derive a priori stopping rules and the corresponding bounds (14) and (16). Again, these results confirm that the number of epochs acts as a regularization parameter, and the best choices following from (13) and (15) suggest that multiple passes over the data are beneficial. In both cases, the stopping rule depends on the smoothness parameter $r$, which is typically unknown; in practice, hold-out cross validation is often used. Following [9], it is possible to show that this procedure adaptively achieves the same convergence rate as in (16).
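As a concrete illustration, the incremental iteration (7)-(8) and the a priori stopping rule of Theorem 2 can be sketched in a few lines of NumPy. This is a minimal sketch for the finite-dimensional case $\mathcal{H} = \mathbb{R}^d$; the function names are ours, not the authors':

```python
import numpy as np

def iir_fit(X, y, gamma, n_epochs):
    """Incremental iterative regularization: n_epochs cycles of the
    incremental gradient recursion (7)-(8) for least squares, w0 = 0."""
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(n_epochs):
        for i in range(n):                 # one cycle = one epoch (one pass)
            residual = X[i] @ w - y[i]
            w -= (gamma / n) * residual * X[i]
    return w

def stopping_rule(n, r):
    """A priori early stopping rule of Theorem 2: t*(n) = ceil(n^(1/(2r+1)))."""
    return int(np.ceil(n ** (1.0 / (2 * r + 1))))
```

For instance, with $n = 800$ training points and smoothness $r = 1$, the rule prescribes $t^*(800) = \lceil 800^{1/3}\rceil = 10$ epochs; the step-size should satisfy $\gamma \le \kappa^{-1}$ with $\kappa = \max_i \|x_i\|^2$.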
3.1 Discussion
In Theorem 2, the obtained bound can be compared to known lower bounds, as well as to previous results for least squares algorithms obtained under Assumption 2. Minimax lower bounds and individual lower bounds [8, 31] suggest that, for $r > 1/2$, $O(n^{-(r-1/2)/(2r+1)})$ is the optimal capacity-independent rate for the $\mathcal{H}$-norm.² In this sense, Theorem 2 provides sharp bounds on the iterates. The bounds can be improved only under stronger assumptions, e.g. on the covering numbers or on the eigenvalues of $L$ [30]. This question is left for future work. The lower bounds for the excess risk [8, 31] are of the form $O(n^{-2r/(2r+1)})$, and in this case the results in Theorems 3 and 7 are not sharp. Our results can be contrasted with online learning algorithms that use the step-size as the regularization parameter. Optimal capacity-independent bounds are obtained in [35], see also [32], and such results can be further improved under capacity assumptions, see [1] and references therein. Interestingly, our results can also be contrasted with non-incremental iterative regularization approaches [36, 34, 3, 5, 9, 26]. Our results show that incremental iterative regularization, with a distribution-independent step-size, behaves like batch gradient descent, at least in terms of convergence of the iterates. Proving advantages of incremental regularization over the batch one is an interesting future research direction. Finally, we note that optimal capacity-independent and capacity-dependent bounds are known for several least squares algorithms, including Tikhonov regularization, see e.g. [31], and spectral filtering methods [3, 9]. These algorithms are essentially equivalent from a statistical perspective but differ from a computational perspective.
²In a recent manuscript, it has been proved that this is indeed the minimax lower bound (G. Blanchard, personal communication).
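The coefficient update of Remark 1 admits an equally short sketch. Again this is a hedged illustration under our own naming; `K` is the kernel Gram matrix on the training inputs:

```python
import numpy as np

def kiir_fit(K, y, gamma, n_epochs):
    """Kernelized incremental iterative regularization (Remark 1):
    after t epochs, w_t(.) = sum_k alpha[k] * K(x_k, .)."""
    n = K.shape[0]
    alpha = np.zeros(n)
    for _ in range(n_epochs):
        for i in range(n):
            # only the i-th coefficient changes at step i of a cycle
            alpha[i] -= (gamma / n) * (K[i] @ alpha - y[i])
    return alpha
```

Predictions at a new point $x$ are then $f(x) = \sum_k \alpha_k K(x_k, x)$; for the linear kernel $K(x,x') = \langle x, x'\rangle$ this reproduces the iteration (7)-(8) with $w = \sum_k \alpha_k x_k$.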
3.2 Elements of the proof
The proofs of the main results are based on a suitable decomposition of the error to be estimated as the sum of two quantities that can be interpreted as a sample error and an approximation error, respectively. Bounds on these two terms are then provided. The main technical contribution of the paper is the sample error bound. The difficulty in proving this result is due to the multiple passes over the data, which induce statistical dependencies between the iterates.
Error decomposition. We consider an auxiliary iteration $(w_t)_{t\in\mathbb{N}}$, which is the expectation of the iterations (7) and (8), starting from $w_0\in\mathcal{H}$ with step-size $\gamma\in\mathbb{R}_{++}$. More explicitly, the considered iteration generates $w_{t+1}$ according to
$$w_{t+1} = u^n_t, \qquad (17)$$
where $u^n_t$ is given by
$$u^0_t = w_t; \qquad u^i_t = u^{i-1}_t - \frac{\gamma}{n}\int_{\mathcal{H}\times\mathbb{R}} \big(\langle u^{i-1}_t, x\rangle_{\mathcal{H}} - y\big)\, x \, d\rho(x,y). \qquad (18)$$
If we let $S:\mathcal{H}\to L^2(\mathcal{H},\rho_X)$ be the linear map $w\mapsto\langle w,\cdot\rangle_{\mathcal{H}}$, which is bounded by $\sqrt{\kappa}$ under Assumption 1, then it is well known [13] that, for every $t\in\mathbb{N}$,
$$\mathcal{E}(\hat w_t) - \inf_{\mathcal{H}}\mathcal{E} = \|S\hat w_t - g_\rho\|^2_\rho \le 2\|S\hat w_t - S w_t\|^2_\rho + 2\|S w_t - g_\rho\|^2_\rho \le 2\kappa\|\hat w_t - w_t\|^2_{\mathcal{H}} + 2\big(\mathcal{E}(w_t) - \inf_{\mathcal{H}}\mathcal{E}\big). \qquad (19)$$
In this paper, we refer to the gap between the empirical and the expected iterates, $\|\hat w_t - w_t\|_{\mathcal{H}}$, as the sample error, and to $\mathcal{A}(t,\gamma,n) = \mathcal{E}(w_t) - \inf_{\mathcal{H}}\mathcal{E}$ as the approximation error. Similarly, if $w^\dagger$ (as defined in (3)) exists, using the triangle inequality we obtain
$$\|\hat w_t - w^\dagger\|_{\mathcal{H}} \le \|\hat w_t - w_t\|_{\mathcal{H}} + \|w_t - w^\dagger\|_{\mathcal{H}}. \qquad (20)$$
Proof main steps. In the setting of Section 2, we summarize the key steps to derive a general bound for the sample error (the proof of the behavior of the approximation error is more standard). The bound on the sample error is derived through several technical lemmas and uses concentration inequalities applied to martingales (the crucial point is the inequality in STEP 5 below). Its complete derivation is reported in Appendix B.2. We introduce the additional linear operators: $T:\mathcal{H}\to\mathcal{H}$, $T = S^*S$; and, for every $x\in X$, $S_x:\mathcal{H}\to\mathbb{R}$, $S_x w = \langle w,x\rangle$, and $T_x:\mathcal{H}\to\mathcal{H}$, $T_x = S_x^* S_x$. Moreover, set $\hat T = \frac{1}{n}\sum_{i=1}^n T_{x_i}$.
We are now ready to state the main steps of the proof.
Sample error bound (STEPS 1 to 5).
STEP 1 (see Proposition 1): Find equivalent formulations for the sequences $\hat w_t$ and $w_t$:
$$\hat w_{t+1} = (I - \gamma\hat T)\hat w_t + \gamma\Big(\frac{1}{n}\sum_{j=1}^n S^*_{x_j} y_j\Big) + \gamma^2\big(\hat A\hat w_t - \hat b\big),$$
$$w_{t+1} = (I - \gamma T) w_t + \gamma S^* g_\rho + \gamma^2 (A w_t - b),$$
where
$$\hat A = \frac{1}{n^2}\sum_{k=2}^n\Big[\prod_{i=k+1}^n\Big(I - \frac{\gamma}{n}T_{x_i}\Big)\Big]T_{x_k}\sum_{j=1}^{k-1}T_{x_j}, \qquad \hat b = \frac{1}{n^2}\sum_{k=2}^n\Big[\prod_{i=k+1}^n\Big(I - \frac{\gamma}{n}T_{x_i}\Big)\Big]T_{x_k}\sum_{j=1}^{k-1}S^*_{x_j}y_j,$$
$$A = \frac{1}{n^2}\sum_{k=2}^n\Big[\prod_{i=k+1}^n\Big(I - \frac{\gamma}{n}T\Big)\Big]T\sum_{j=1}^{k-1}T, \qquad b = \frac{1}{n^2}\sum_{k=2}^n\Big[\prod_{i=k+1}^n\Big(I - \frac{\gamma}{n}T\Big)\Big]T\sum_{j=1}^{k-1}S^* g_\rho.$$
STEP 2 (see Lemma 5): Use the formulation obtained in STEP 1 to derive the following recursion,
$$\hat w_t - w_t = \big(I - \gamma\hat T + \gamma^2\hat A\big)^t(\hat w_0 - w_0) + \gamma\sum_{k=0}^{t-1}\big(I - \gamma\hat T + \gamma^2\hat A\big)^{t-k-1}\zeta_k$$
with $\zeta_k = (T - \hat T)w_k + \gamma(\hat A - A)w_k + \big(\frac{1}{n}\sum_{i=1}^n S^*_{x_i}y_i - S^* g_\rho\big) + \gamma(b - \hat b)$.
STEP 3 (see Lemmas 6 and 7): Initialize $\hat w_0 = w_0 = 0$, prove that $\|I - \gamma\hat T + \gamma^2\hat A\| \le 1$, and derive from STEP 2 that
$$\|\hat w_t - w_t\|_{\mathcal{H}} \le \gamma\big(\|T - \hat T\| + \gamma\|\hat A - A\|\big)\sum_{k=0}^{t-1}\|w_k\|_{\mathcal{H}} + \gamma t\Big(\Big\|\frac{1}{n}\sum_{i=1}^n S^*_{x_i}y_i - S^* g_\rho\Big\| + \gamma\|b - \hat b\|\Big).$$
STEP 4 (see Lemma 8): Let Assumption 2 hold for some $r\in\mathbb{R}_+$ and $g\in L^2(\mathcal{H},\rho_X)$. Prove that, for every $t\in\mathbb{N}$,
$$\|w_t\|_{\mathcal{H}} \le \begin{cases} \max\{\kappa^{r-1/2}, (\gamma t)^{1/2-r}\}\,\|g\|_\rho & \text{if } r\in[0,1/2[,\\ \kappa^{r-1/2}\|g\|_\rho & \text{if } r\in[1/2,+\infty[.\end{cases}$$
STEP 5 (see Lemma 9 and Proposition 2): Prove that, with probability greater than $1-\delta$, the following inequalities hold:
$$\|\hat A - A\|_{HS} \le \frac{32\kappa^2}{3\sqrt n}\log\frac{4}{\delta}, \qquad \|\hat b - b\|_{\mathcal{H}} \le \frac{32 M\kappa^{2}}{3\sqrt n}\log\frac{4}{\delta},$$
$$\|\hat T - T\|_{HS} \le \frac{16\kappa}{3\sqrt n}\log\frac{2}{\delta}, \qquad \Big\|\frac{1}{n}\sum_{i=1}^n S^*_{x_i}y_i - S^* g_\rho\Big\|_{\mathcal{H}} \le \frac{16\sqrt{\kappa}\,M}{3\sqrt n}\log\frac{2}{\delta}.$$
STEP 6 (approximation error bound, see Theorem 6): Prove that, if Assumption 2 holds for some $r\in\,]0,+\infty[$, then $\mathcal{E}(w_t) - \inf_{\mathcal{H}}\mathcal{E} \le \big(\frac{r}{\gamma t}\big)^{2r}\|g\|^2_\rho$. Moreover, if Assumption 2 holds with $r = 1/2$, then $\|w_t - w^\dagger\|_{\mathcal{H}}\to 0$, and if Assumption 2 holds for some $r\in\,]1/2,+\infty[$, then $\|w_t - w^\dagger\|_{\mathcal{H}} \le \big(\frac{r-1/2}{\gamma t}\big)^{r-1/2}\|g\|_\rho$.
STEP 7: Plug the sample and approximation error bounds obtained in STEPS 1-5 and STEP 6 into (19) and (20), respectively.
4 Experiments
Synthetic data. We consider a scalar linear regression problem with random design.
The input points $(x_i)_{1\le i\le n}$ are uniformly distributed in $[0,1]$ and the output points are obtained as $y_i = \langle w^*, \Phi(x_i)\rangle + N_i$, where $N_i$ is Gaussian noise with zero mean and standard deviation 1, and $\Phi = (\varphi_k)_{1\le k\le d}$ is a dictionary of functions whose $k$-th element is $\varphi_k(x) = \cos((k-1)x) + \sin((k-1)x)$. In Figure 1, we plot the test error for $d = 5$ (with $n = 80$ in (a) and $n = 800$ in (b)). The plots show that the number of epochs acts as a regularization parameter, and that early stopping is beneficial for achieving a better test error. Moreover, in accordance with the theory, the experiments suggest that the number of performed epochs increases as the number of available training points increases.
Real data. We tested the kernelized version of our algorithm (see Remark 1 and Appendix A) on the cpuSmall³, Adult and Breast Cancer Wisconsin (Diagnostic)⁴ real-world datasets. We considered a subset of Adult, with $n = 1600$.
Figure 1: Test error as a function of the number of iterations. In (a), $n = 80$, and the total number of iterations of IIR is 8000, corresponding to 100 epochs. In (b), $n = 800$ and the total number of epochs is 400. The best test error is obtained after 9 epochs in (a) and after 31 epochs in (b).
The results are shown in Figure 2. A comparison of the test errors obtained with the kernelized version of the method proposed in this paper (Kernel Incremental Iterative Regularization, KIIR), Kernel Iterative Regularization (KIR), that is, the kernelized version of gradient descent, and Kernel Ridge Regression (KRR) is reported in Table 1. The results show that the test error of KIIR is comparable to that of KIR and KRR.
³Available at http://www.cs.toronto.edu/~delve/data/comp-activ/desc.html
⁴Adult and Breast Cancer Wisconsin (Diagnostic), UCI repository, 2013.
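The synthetic experiment can be reproduced along the following lines. This is a hedged sketch: the target $w^*$ and the random seed are our own arbitrary choices, and we simply reuse the incremental update (7)-(8) on the feature map $\Phi$:

```python
import numpy as np

rng = np.random.default_rng(0)
d, n, n_test = 5, 80, 1000
w_star = rng.standard_normal(d)            # unknown target (our choice)

def phi(x):
    """Dictionary Phi(x) = (cos((k-1)x) + sin((k-1)x)) for k = 1..d."""
    k = np.arange(d)
    return np.cos(k * x) + np.sin(k * x)

X = np.stack([phi(x) for x in rng.uniform(0, 1, n)])
y = X @ w_star + rng.standard_normal(n)    # Gaussian noise, std 1
X_test = np.stack([phi(x) for x in rng.uniform(0, 1, n_test)])
y_clean = X_test @ w_star                  # noise-free test targets

gamma = 1.0 / max(np.sum(X ** 2, axis=1))  # gamma <= kappa^{-1}
w = np.zeros(d)
errors = []
for epoch in range(100):
    for i in range(n):                     # one epoch of iteration (7)-(8)
        w -= (gamma / n) * (X[i] @ w - y[i]) * X[i]
    errors.append(float(np.mean((X_test @ w - y_clean) ** 2)))
best_epoch = 1 + int(np.argmin(errors))    # early stopping on the test error
```

Tracking `errors` over epochs reproduces the qualitative shape of Figure 1: the test error first decreases and the epoch minimizing it plays the role of the early stopping time.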
Figure 2: Training (orange) and validation (blue) classification errors obtained by KIIR on the Breast Cancer dataset as a function of the number of iterations. The test error increases after a certain number of iterations, while the training error keeps decreasing with the number of iterations.
Table 1: Test error comparison on real datasets. Median values over 5 trials.
Dataset        | n_tr | d   | Error Measure | KIIR   | KRR    | KIR
cpuSmall       | 5243 | 12  | RMSE          | 5.9125 | 3.6841 | 5.4665
Adult          | 1600 | 123 | Class. Err.   | 0.167  | 0.164  | 0.154
Breast Cancer  | 400  | 30  | Class. Err.   | 0.0118 | 0.0118 | 0.0237
Acknowledgments
This material is based upon work supported by CBMM, funded by NSF STC award CCF-1231216, and by the MIUR FIRB project RBFR12M3AC. S. Villa is a member of GNAMPA of the Istituto Nazionale di Alta Matematica (INdAM).
References
[1] F. Bach and A. Dieuleveut. Non-parametric stochastic approximation with large step sizes. arXiv:1408.0361, 2014. [2] P. Bartlett and M. Traskin. AdaBoost is consistent. J. Mach. Learn. Res., 8:2347–2368, 2007. [3] F. Bauer, S. Pereverzev, and L. Rosasco. On regularization algorithms in learning theory. J. Complexity, 23(1):52–72, 2007. [4] D. P. Bertsekas. A new class of incremental gradient methods for least squares problems. SIAM J. Optim., 7(4):913–926, 1997. [5] G. Blanchard and N. Krämer. Optimal learning rates for kernel conjugate gradient regression. In Advances in Neural Inf. Proc. Systems (NIPS), pages 226–234, 2010. [6] L. Bottou and O. Bousquet. The tradeoffs of large scale learning. In S. Sra, S. Nowozin, and S. J. Wright, editors, Optimization for Machine Learning, pages 351–368. MIT Press, 2011. [7] P. Bühlmann and B. Yu. Boosting with the L2 loss: regression and classification. J. Amer. Stat. Assoc., 98:324–339, 2003. [8] A. Caponnetto and E. De Vito. Optimal rates for regularized least-squares algorithm. Found. Comput. Math., 2006. [9] A. Caponnetto and Y.
Yao. Cross-validation based adaptation for regularization operators in learning theory. Anal. Appl., 8:161–183, 2010. [10] N. Cesa-Bianchi, A. Conconi, and C. Gentile. On the generalization ability of on-line learning algorithms. IEEE Trans. Information Theory, 50(9):2050–2057, 2004. [11] N. Cesa-Bianchi and G. Lugosi. Prediction, Learning, and Games. Cambridge University Press, 2006. [12] F. Cucker and D. X. Zhou. Learning Theory: An Approximation Theory Viewpoint. Cambridge University Press, 2007. [13] E. De Vito, L. Rosasco, A. Caponnetto, U. De Giovannini, and F. Odone. Learning from examples as an inverse problem. J. Mach. Learn. Res., 6:883–904, 2005. [14] E. De Vito, L. Rosasco, A. Caponnetto, M. Piana, and A. Verri. Some properties of regularized kernel methods. Journal of Machine Learning Research, 5:1363–1390, 2004. [15] H. W. Engl, M. Hanke, and A. Neubauer. Regularization of Inverse Problems. Kluwer, 1996. [16] P.-S. Huang, H. Avron, T. Sainath, V. Sindhwani, and B. Ramabhadran. Kernel methods match deep neural networks on TIMIT. In IEEE ICASSP, 2014. [17] W. Jiang. Process consistency for AdaBoost. Ann. Stat., 32:13–29, 2004. [18] Y. LeCun, L. Bottou, G. Orr, and K. Müller. Efficient backprop. In G. Orr and K. Müller, editors, Neural Networks: Tricks of the Trade. Springer, 1998. [19] A. Nedic and D. P. Bertsekas. Incremental subgradient methods for nondifferentiable optimization. SIAM Journal on Optimization, 12(1):109–138, 2001. [20] A. Nemirovski, A. Juditsky, G. Lan, and A. Shapiro. Robust stochastic approximation approach to stochastic programming. SIAM J. Optim., 19(4):1574–1609, 2008. [21] A. Nemirovskii. The regularization properties of adjoint gradient method in ill-posed problems. USSR Computational Mathematics and Mathematical Physics, 26(2):7–16, 1986. [22] F. Orabona. Simultaneous model selection and optimization through parameter-free stochastic learning. NIPS Proceedings, 2014. [23] I. Pinelis.
Optimum bounds for the distributions of martingales in Banach spaces. Ann. Probab., 22(4):1679–1706, 1994. [24] B. Polyak. Introduction to Optimization. Optimization Software, New York, 1987. [25] J. Ramsay and B. Silverman. Functional Data Analysis. Springer-Verlag, New York, 2005. [26] G. Raskutti, M. Wainwright, and B. Yu. Early stopping for non-parametric regression: an optimal data-dependent stopping rule. In 49th Annual Allerton Conference, pages 1318–1325. IEEE, 2011. [27] S. Smale and D. Zhou. Shannon sampling II: connections to learning theory. Appl. Comput. Harmon. Anal., 19(3):285–302, November 2005. [28] S. Smale and D.-X. Zhou. Learning theory estimates via integral operators and their approximations. Constr. Approx., 26(2):153–172, 2007. [29] N. Srebro, K. Sridharan, and A. Tewari. Optimistic rates for learning with a smooth loss. arXiv:1009.3896, 2012. [30] I. Steinwart and A. Christmann. Support Vector Machines. Springer, 2008. [31] I. Steinwart, D. R. Hush, and C. Scovel. Optimal rates for regularized least squares regression. In COLT, 2009. [32] P. Tarrès and Y. Yao. Online learning as stochastic approximation of regularization paths: optimality and almost-sure convergence. IEEE Trans. Inform. Theory, 60(9):5716–5735, 2014. [33] V. Vapnik. Statistical Learning Theory. Wiley, New York, 1998. [34] Y. Yao, L. Rosasco, and A. Caponnetto. On early stopping in gradient descent learning. Constr. Approx., 26:289–315, 2007. [35] Y. Ying and M. Pontil. Online gradient descent learning algorithms. Found. Comput. Math., 8:561–596, 2008. [36] T. Zhang and B. Yu. Boosting with early stopping: convergence and consistency. Annals of Statistics, pages 1538–1579, 2005.
Cross-Domain Matching for Bag-of-Words Data via Kernel Embeddings of Latent Distributions
Yuya Yoshikawa*, Nara Institute of Science and Technology, Nara, 630-0192, Japan. yoshikawa.yuya.yl9@is.naist.jp
Tomoharu Iwata, NTT Communication Science Laboratories, Kyoto, 619-0237, Japan. iwata.tomoharu@lab.ntt.co.jp
Hiroshi Sawada, NTT Service Evolution Laboratories, Kanagawa, 239-0847, Japan. sawada.hiroshi@lab.ntt.co.jp
Takeshi Yamada, NTT Communication Science Laboratories, Kyoto, 619-0237, Japan. yamada.tak@lab.ntt.co.jp
Abstract
We propose a kernel-based method for finding matchings between instances across different domains, such as multilingual documents and images with annotations. Each instance is assumed to be represented as a multiset of features, e.g., a bag-of-words representation for documents. The major difficulty in finding cross-domain relationships is that the similarity between instances in different domains cannot be directly measured. To overcome this difficulty, the proposed method embeds all the features of the different domains in a shared latent space, and regards each instance as a distribution of its own features in the shared latent space. To represent the distributions efficiently and nonparametrically, we employ the framework of the kernel embeddings of distributions. The embedding is estimated so as to minimize the difference between the distributions of paired instances while keeping unpaired instances apart. In our experiments, we show that the proposed method can achieve high performance in finding correspondences between multilingual Wikipedia articles, between documents and tags, and between images and tags.
1 Introduction
The discovery of matched instances in different domains is an important task, which appears in natural language processing, information retrieval and data mining tasks such as finding the alignment of cross-lingual sentences [1], attaching tags to images [2] or text documents [3], and matching user identifications in different databases [4]. When given an instance in a source domain, our goal is to find the instance in a target domain that is the most closely related to the given instance. In this paper, we focus on a supervised setting, where correspondence information between some instances in different domains is given. To find matching in a single domain, e.g., to find documents relevant to an input document, a similarity (or distance) measure between instances can be used. On the other hand, when trying to find matching between instances in different domains, we cannot directly measure the distances, since the instances consist of different types of features. For example, when matching documents in different languages, since the documents have different vocabularies, we cannot directly measure the similarities between documents across languages without dictionaries.
*The author moved to the Software Technology and Artificial Intelligence Research Laboratory (STAIR Lab) at Chiba Institute of Technology, Japan.
Figure 1: An example of the proposed method used on a multilingual document matching task. Correspondences between instances in source (English) and target (Japanese) domains are observed. The proposed method assumes that each feature (vocabulary term) has a latent vector in a shared latent space, and each instance is represented as a distribution of the latent vectors of the features associated with the instance. Then, the distribution is mapped as an element in a reproducing kernel Hilbert space (RKHS) based on the kernel embeddings of distributions.
The latent vectors are estimated so that paired instances are embedded closer together in the RKHS. One solution is to map instances in both the source and target domains into a shared latent space. One such method is canonical correlation analysis (CCA) [5], which maps instances into a latent space by linear projection so as to maximize the correlation between paired instances in the latent space. However, in practice, CCA cannot capture non-linear relationships due to its linearity. To find non-linear correspondences, kernel CCA [6] can be used. It has been reported that kernel CCA performs well for document/sentence alignment between different languages [7, 8], for searching for images from text queries [9] and for matching 2D-3D face images [10]. Note that the performance of kernel CCA depends on how appropriately we define the kernel function for measuring the similarity between instances within a domain. Many kernels, such as linear, polynomial and Gaussian kernels, cannot account for the occurrence of different but semantically similar words in two instances, because these kernels use the inner product between the feature vectors representing the instances. For example, the words ‘PC’ and ‘Computer’ are different but have the same meaning. Nevertheless, the kernel value between an instance consisting only of ‘PC’ and one consisting only of ‘Computer’ is equal to zero with linear and polynomial kernels. Even if a Gaussian kernel is used, the kernel value is determined only by the vector lengths of the instances. In this paper, we propose a kernel-based cross-domain matching method that can overcome this problem of kernel CCA. Figure 1 shows an example of the proposed method. The proposed method assumes that each feature in the source and target domains is associated with a latent vector in a shared latent space. Since all the features are mapped into the latent space, the proposed method can measure the similarity between features in different domains.
Then, each instance is represented as a distribution of the latent vectors of the features that are contained in the instance. To represent the distributions efficiently and nonparametrically, we employ the framework of the kernel embeddings of distributions, which measures the difference between distributions in a reproducing kernel Hilbert space (RKHS) without the need to define parametric distributions. The latent vectors are estimated by minimizing the differences between the distributions of paired instances while keeping unpaired instances apart. The proposed method can discover unseen matching in test data by using the distributions of the estimated latent vectors. We explain matching between two domains below; however, the proposed method can be straightforwardly extended to matching between three or more domains by regarding one of the domains as a pivot domain. In our experiments, we demonstrate the effectiveness of our proposed method in tasks that involve finding the correspondence between multi-lingual Wikipedia articles, between documents and tags, and between images and tags, by comparison with existing linear and non-linear matching methods. 2 Related Work As described above, canonical correlation analysis (CCA) and kernel CCA have been successfully used for finding various types of cross-domain matching. When we want to match cross-domain instances represented by bag-of-words, such as documents, bilingual topic models [1, 11] can also be used. The difference between the proposed method and these methods is that, since the proposed method represents each instance as a set of latent vectors of its own features, it can learn a more complex representation of the instance than these existing methods, which represent each instance as a single latent vector. Another difference is that the proposed method employs a discriminative approach, while kernel CCA and bilingual topic models employ generative ones.
To model cross-domain data, deep learning and neural network approaches have recently been proposed [12, 13]. Unlike such approaches, the proposed method performs non-linear matching without deciding the number of layers of the networks, which largely affects their performance. A key technique of the proposed method is the kernel embeddings of distributions [14], which can represent a distribution as an element in an RKHS while preserving the moment information of the distribution, such as the mean, covariance and higher-order moments, without density estimation. The kernel embeddings of distributions have been successfully used for a statistical test of the independence of two sample sets [15], discriminative learning on distribution data [16], anomaly detection for group data [17], density estimation [18] and a three-variable interaction test [19]. Most previous studies on the kernel embeddings of distributions consider cases where the distributions are unobserved but the samples generated from the distributions are observed, and each of the samples is represented as a dense vector. The kernel embedding technique cannot be directly used to represent observed multisets of features such as bag-of-words for documents, since each feature is represented as a one-hot vector whose entries are all zero except for the single dimension corresponding to that feature. In this study, we benefit from the kernel embeddings of distributions by representing each feature as a dense vector in a shared latent space. The proposed method is inspired by the use of the kernel embeddings of distributions in bag-of-words data classification [20] and regression [21]. Those methods are applied to single-domain data, and the latent vectors of features are used to measure the similarity between the features in a domain.
Unlike these methods, the proposed method is used for the cross-domain matching of two different types of domain data, and the latent vectors are used for measuring the similarity between the features in different domains. 3 Kernel Embeddings of Distributions In this section, we introduce the framework of the kernel embeddings of distributions. The kernel embeddings of distributions are used to embed any probability distribution P on space X into a reproducing kernel Hilbert space (RKHS) H_k specified by kernel k, and the distribution is represented as an element m∗(P) in the RKHS. More precisely, given distribution P, the kernel embedding of the distribution m∗(P) is defined as follows: m∗(P) := E_{x∼P}[k(·, x)] = ∫_X k(·, x) dP ∈ H_k, (1) where the kernel k is referred to as the embedding kernel. It is known that the kernel embedding m∗(P) preserves the properties of probability distribution P, such as the mean, covariance and higher-order moments, when characteristic kernels (e.g., the Gaussian RBF kernel) are used [22]. When a set of samples X = {x_l}_{l=1}^n is drawn from the distribution, by interpreting the sample set X as the empirical distribution P̂ = (1/n) Σ_{l=1}^n δ_{x_l}(·), where δ_x(·) is the Dirac delta function at point x ∈ X, the empirical kernel embedding m(X) is given by m(X) = (1/n) Σ_{l=1}^n k(·, x_l), (2) which approximates m∗(P) with error ‖m(X) − m∗(P)‖_{H_k} = O_p(n^{−1/2}) [14]. Unlike kernel density estimation, the error rate of the kernel embeddings is independent of the dimensionality of the given distribution. 3.1 Measuring the Difference between Distributions By using the kernel embedding representation Eq. (2), we can measure the difference between two distributions. Given two sets of samples X = {x_l}_{l=1}^n and Y = {y_{l′}}_{l′=1}^{n′}, where x_l and y_{l′} belong to the same space, we can obtain their kernel embedding representations m(X) and m(Y). Then, the difference between m(X) and m(Y) is given by D(X, Y) = ‖m(X) − m(Y)‖²_{H_k}.
(3) Intuitively, it reflects the difference in the moment information of the distributions. The difference is equivalent to the squared maximum mean discrepancy (MMD), which is used for a statistical test of the independence of two distributions [15]. The difference can be calculated by expanding Eq. (3) as follows: ‖m(X) − m(Y)‖²_{H_k} = ⟨m(X), m(X)⟩_{H_k} + ⟨m(Y), m(Y)⟩_{H_k} − 2⟨m(X), m(Y)⟩_{H_k}, (4) where ⟨·, ·⟩_{H_k} is the inner product in the RKHS. In particular, ⟨m(X), m(Y)⟩_{H_k} is given by ⟨m(X), m(Y)⟩_{H_k} = ⟨(1/n) Σ_{l=1}^n k(·, x_l), (1/n′) Σ_{l′=1}^{n′} k(·, y_{l′})⟩_{H_k} = (1/(nn′)) Σ_{l=1}^n Σ_{l′=1}^{n′} k(x_l, y_{l′}). (5) ⟨m(X), m(X)⟩_{H_k} and ⟨m(Y), m(Y)⟩_{H_k} can be calculated in the same way from Eq. (5). 4 Proposed Method Suppose that we are given a training set consisting of N instance pairs O = {(d^s_i, d^t_i)}_{i=1}^N, where d^s_i is the ith instance in a source domain and d^t_i is the ith instance in a target domain. These instances d^s_i and d^t_i are represented as multisets of features included in the source feature set F_s and the target feature set F_t, respectively; that is, the instances are represented as bags-of-words (BoW). The goal of our task is to determine the unseen relationships between instances across the source and target domains in test data. The number of instances in the source domain may be different from that in the target domain. 4.1 Kernel Embeddings of Distributions in a Shared Latent Space As described in Section 1, the difficulty in finding cross-domain instance matching is that the similarity between instances across source and target domains cannot be directly measured. We have also stated that, although we can find a latent space in which the similarity can be measured by using kernel CCA, standard kernel functions, e.g., a Gaussian kernel, cannot reflect the co-occurrence of different but related features in a kernel calculation between instances. To overcome these problems, we propose a new data representation for finding cross-domain instance matching.
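As a side note, the embedding difference of Eqs. (3)-(5) reduces to means over pairwise kernel evaluations, and can be computed in a few lines. The sketch below uses a Gaussian RBF embedding kernel; the function names are ours, not the paper's.

```python
import numpy as np

def gaussian_kernel(A, B, gamma=1.0):
    """Gram matrix of k(a, b) = exp(-(gamma/2) * ||a - b||^2) between rows of A and B."""
    sq_dists = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-0.5 * gamma * sq_dists)

def embedding_difference(X, Y, gamma=1.0):
    """D(X, Y) = ||m(X) - m(Y)||^2 in the RKHS, expanded as in Eqs. (4)-(5):
    each inner product becomes a mean over pairwise kernel evaluations."""
    return (gaussian_kernel(X, X, gamma).mean()
            + gaussian_kernel(Y, Y, gamma).mean()
            - 2.0 * gaussian_kernel(X, Y, gamma).mean())
```

For identical sample sets the difference is zero, and it grows as the two empirical distributions separate; this quantity is the (biased) empirical estimate of the squared MMD.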
The proposed method assumes that each feature f ∈ F_s in the source feature set has a q-dimensional latent vector x_f ∈ R^q in a shared space. Likewise, each feature g ∈ F_t in the target feature set has a q-dimensional latent vector y_g ∈ R^q in the shared space. Since all the features in the source and target domains are mapped into a common shared space, the proposed method can capture the relationships between features both within each domain and across different domains. We define the sets of latent vectors in the source and target domains as X = {x_f}_{f∈F_s} and Y = {y_g}_{g∈F_t}, respectively. The proposed method assumes that each instance is represented by a distribution (or multiset) of the latent vectors of the features that are contained in the instance. The ith instance in the source domain d^s_i is represented by a set of latent vectors X_i = {x_f}_{f∈d^s_i}, and the jth instance in the target domain d^t_j is represented by a set of latent vectors Y_j = {y_g}_{g∈d^t_j}. Note that X_i and Y_j lie in the same latent space. In Section 3, we introduced the kernel embedding representation of a distribution and described how to measure the difference between two distributions when samples generated from the distributions are observed. In the proposed method, we employ the kernel embeddings of distributions to represent the distributions of the latent vectors for the instances. The kernel embedding representations for the ith source and the jth target domain instances are given by m(X_i) = (1/|d^s_i|) Σ_{f∈d^s_i} k(·, x_f), m(Y_j) = (1/|d^t_j|) Σ_{g∈d^t_j} k(·, y_g). (6) Then, the difference between the distributions of the latent vectors is measured by using Eq. (3); that is, the difference between the ith source and the jth target domain instances is given by D(X_i, Y_j) = ‖m(X_i) − m(Y_j)‖²_{H_k}. (7) 4.2 Model The proposed method assumes that paired instances have similar distributions of latent vectors and unpaired instances have different distributions.
In accordance with this assumption, we define the likelihood of the relationship between the ith source domain instance and the jth target domain instance as follows: p(d^t_j | d^s_i, X, Y, θ) = exp(−D(X_i, Y_j)) / Σ_{j′=1}^N exp(−D(X_i, Y_{j′})), (8) where θ is a set of hyper-parameters for the embedding kernel used in Eq. (6). Eq. (8) is in fact the conditional probability with which the jth target domain instance is chosen given the ith source domain instance. This formulation is more efficient than considering a bidirectional matching. Intuitively, when distribution X_i is more similar to Y_j than to the other distributions {Y_{j′} | j′ ≠ j}, the probability has a higher value. We define the posterior distribution of latent vectors X and Y. By placing Gaussian priors with precision parameter ρ > 0 on X and Y, that is, p(X|ρ) ∝ Π_{x∈X} exp(−(ρ/2)‖x‖₂²) and p(Y|ρ) ∝ Π_{y∈Y} exp(−(ρ/2)‖y‖₂²), the posterior distribution is given by p(X, Y | O, Θ) = (1/Z) p(X|ρ) p(Y|ρ) Π_{i=1}^N p(d^t_i | d^s_i, X, Y, θ), (9) where O = {(d^s_i, d^t_i)}_{i=1}^N is a training set of N instance pairs, Θ = {θ, ρ} is a set of hyper-parameters and Z = ∫∫ p(X, Y, O, Θ) dX dY is a marginal probability, which is constant with respect to X and Y. 4.3 Learning We estimate the latent vectors X and Y by maximizing the posterior probability given by Eq. (9). Equivalently, we consider the negative logarithm of the posterior probability, L(X, Y) = Σ_{i=1}^N [ D(X_i, Y_i) + log Σ_{j=1}^N exp(−D(X_i, Y_j)) ] + (ρ/2) ( Σ_{x∈X} ‖x‖₂² + Σ_{y∈Y} ‖y‖₂² ), (10) and minimize it with respect to the latent vectors; maximizing Eq. (9) is equivalent to minimizing Eq. (10). To minimize Eq. (10) with respect to X and Y, we perform a gradient-based optimization. The gradient of Eq.
(10) with respect to each x_f ∈ X is given by ∂L(X, Y)/∂x_f = Σ_{i: f∈d^s_i} [ ∂D(X_i, Y_i)/∂x_f − (1/c_i) Σ_{j=1}^N e_{ij} ∂D(X_i, Y_j)/∂x_f ] + ρ x_f, (11) where e_{ij} = exp(−D(X_i, Y_j)), c_i = Σ_{j=1}^N exp(−D(X_i, Y_j)), (12) and the gradient of the difference between distributions X_i and Y_j with respect to x_f is given by ∂D(X_i, Y_j)/∂x_f = (1/|d^s_i|²) Σ_{l∈d^s_i} Σ_{l′∈d^s_i} ∂k(x_l, x_{l′})/∂x_f − (2/(|d^s_i||d^t_j|)) Σ_{l∈d^s_i} Σ_{g∈d^t_j} ∂k(x_l, y_g)/∂x_f. (13) When the distribution X_i does not include the latent vector x_f, the gradient is the zero vector. ∂k(x_l, x_{l′})/∂x_f is the gradient of the embedding kernel and depends on the choice of kernel; when the embedding kernel is a Gaussian kernel, the gradient is calculated as with Eq. (15) in [21]. Similarly, the gradient of Eq. (10) with respect to each y_g ∈ Y is given by ∂L(X, Y)/∂y_g = Σ_{i=1}^N [ ∂D(X_i, Y_i)/∂y_g − (1/c_i) Σ_{j: g∈d^t_j} e_{ij} ∂D(X_i, Y_j)/∂y_g ] + ρ y_g, (14) where the gradient of the difference between distributions X_i and Y_j with respect to y_g can be calculated as with Eq. (13). Learning is performed by alternately updating X using Eq. (11) and updating Y using Eq. (14) until the negative log posterior, Eq. (10), converges. 4.4 Matching After the estimation of the latent vectors X and Y, the proposed method can reveal the matching between test instances. The matching is found by first measuring the difference between a given source domain instance and the target domain instances using Eq. (7), and then searching for the instance pair with the smallest difference. 5 Experiments In this section, we report our experimental results for three different types of cross-domain datasets: multi-lingual Wikipedia, document-tag and image-tag datasets. Setup of proposed method. Throughout these experiments, we used a Gaussian kernel with parameter γ ≥ 0, k(x_f, y_g) = exp(−(γ/2)‖x_f − y_g‖₂²), as an embedding kernel.
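With this Gaussian embedding kernel, the learning objective of Eq. (10) can be minimized numerically. The following is a minimal end-to-end sketch on toy data; the names, the toy feature sets, and the use of SciPy's L-BFGS-B with finite-difference gradients in place of the analytic gradients of Eqs. (11)-(14) are all our assumptions, not the paper's implementation (which also uses PCA initialization and validation-based hyper-parameter selection).

```python
import numpy as np
from scipy.optimize import minimize

def instance_diff(Xi, Yj, gamma=1.0):
    # D(Xi, Yj) of Eq. (7), expanded into means of pairwise Gaussian kernel values
    k = lambda A, B: np.exp(-0.5 * gamma *
                            ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=2))
    return k(Xi, Xi).mean() + k(Yj, Yj).mean() - 2.0 * k(Xi, Yj).mean()

def objective(theta, src, tgt, n_s, n_t, q, gamma=1.0, rho=0.1):
    # Eq. (10): matched-pair differences + log-normalizer + Gaussian prior term
    Xlat = theta[:n_s * q].reshape(n_s, q)   # latent vectors of source features
    Ylat = theta[n_s * q:].reshape(n_t, q)   # latent vectors of target features
    D = np.array([[instance_diff(Xlat[fi], Ylat[fj], gamma)
                   for fj in tgt] for fi in src])
    loss = sum(D[i, i] + np.log(np.exp(-D[i]).sum()) for i in range(len(src)))
    return loss + 0.5 * rho * ((Xlat ** 2).sum() + (Ylat ** 2).sum())

# Toy problem: 2 instance pairs, 4 source / 3 target features, q = 2
rng = np.random.default_rng(0)
src = [np.array([0, 1]), np.array([2, 3])]   # feature indices per source instance
tgt = [np.array([0]), np.array([1, 2])]      # feature indices per target instance
n_s, n_t, q = 4, 3, 2
theta0 = 0.1 * rng.standard_normal((n_s + n_t) * q)
res = minimize(objective, theta0, args=(src, tgt, n_s, n_t, q), method="L-BFGS-B")
```

After optimization, `res.x` holds the learned latent vectors, and matching proceeds as in Section 4.4 by ranking target instances by `instance_diff`.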
The hyper-parameters of the proposed method are the dimensionality q of the shared latent space, the regularization parameter ρ for the latent vectors and the Gaussian embedding kernel parameter γ. After training the proposed method with various hyper-parameters q ∈ {8, 10, 12}, ρ ∈ {0, 10^−2, 10^−1} and γ ∈ {10^−1, 10^0, ..., 10^3}, we chose the optimal hyper-parameters by using validation data. When training the proposed method, we initialized the latent vectors X and Y by applying principal component analysis (PCA) to a matrix concatenating the two feature-frequency matrices of the source and target domains. Then, we employed the L-BFGS method [23] with the gradients given by Eqs. (11) and (14) to learn the latent vectors. Comparison methods. We compared the proposed method with the k-nearest neighbor method (KNN), canonical correlation analysis (CCA), kernel CCA (KCCA), bilingual latent Dirichlet allocation (BLDA), and kernel CCA with the kernel embeddings of distributions (KED-KCCA). For a test instance in the source domain, our KNN searches for the nearest-neighbor source instances in the training data, and outputs the target instance in the test data that is located closest to the target instances paired with the retrieved source instances. CCA and KCCA first learn the projection of instances into a shared latent space using training data, and then find matching between instances by projecting the test instances into the shared latent space. KCCA used a Gaussian kernel for measuring the similarity between instances, and its Gaussian kernel parameter and regularization parameter were chosen by using validation data. With BLDA, we first learned the same model as [1, 11] and found matching between instances in the test data by obtaining the topic distributions of these instances from the learned model. KED-KCCA uses the kernel embeddings of distributions described in Section 3 to obtain the kernel values between the instances.
The vector representations of features were obtained by applying singular value decomposition (SVD) to the instance-feature frequency matrices; here, we set the dimensionality of the vector representations to 100. Then, KED-KCCA learns kernel CCA with these kernel values, as with the above KCCA. For CCA, KCCA, BLDA and KED-KCCA, we chose the optimal latent dimensionality (or number of topics) within {10, 20, ..., 100} by using validation data. Evaluation method. Throughout the experiments, we quantitatively evaluated the matching performance by using the precision with which the true target instance is included in a set of R candidate instances, S(R), found by each method. More formally, the precision is given by Precision@R = (1/N_te) Σ_{i=1}^{N_te} δ(t_i ∈ S_i(R)), (15) where N_te is the number of test instances in the target domain, t_i is the ith true target instance, S_i(R) is the set of R candidate instances for the ith source instance and δ(·) is the binary function that returns 1 if the argument is true, and 0 otherwise. 5.1 Matching between Bilingual Documents With a multi-lingual Wikipedia document dataset, we examine whether the proposed method can find the correct matching between documents written in different languages. The dataset includes 34,024 Wikipedia documents for each of six languages: German (de), English (en), Finnish (fi), French (fr), Italian (it) and Japanese (ja), and documents with the same content are aligned across the languages. From the dataset, we create 6C2 = 15 bilingual document pairs. We regard the first component of each pair as a source domain and the other as a target domain. For each of the bilingual document pairs, we randomly create 10 evaluation sets that consist of 1,000 document pairs as training data, 100 document pairs as validation data and 100 document pairs as test data. Here, each document is represented as a bag-of-words without stopwords and low-frequency words.
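The evaluation metric of Eq. (15) reduces to a short routine once the pairwise differences are in hand; a sketch (the function name and the diagonal ground-truth convention are ours):

```python
import numpy as np

def precision_at_R(D, R):
    """Eq. (15): fraction of source instances i whose true target (assumed
    here to be target index i) appears among the R target instances with
    the smallest difference D[i, :]."""
    D = np.asarray(D)
    hits = sum(i in np.argsort(D[i])[:R] for i in range(D.shape[0]))
    return hits / D.shape[0]
```

Here `D[i, j]` would hold `D(X_i, Y_j)` of Eq. (7) computed on test instances, with true pairs on the diagonal.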
Figure 2 shows the matching precision for each of the bilingual pairs of the Wikipedia dataset. For all the bilingual pairs, the proposed method achieves significantly higher precision than the other methods over a wide range of R. Table 1 shows examples of predicted matching on the Japanese-English Wikipedia dataset. Figure 2: Precision of matching prediction and its standard deviation on multi-lingual Wikipedia datasets. Table 1: Top five English documents matched by the proposed method and KCCA given five Japanese documents in the Wikipedia dataset. Titles in bold typeface indicate correct matching. (a) Japanese input title: SD カード (SD card). Proposed: Intel, SD card, Libavcodec, MPlayer, Freeware. KCCA: BBC World News, SD card, Morocco, Phoenix, 24 Hours of Le Mans. (b) Japanese input title: 炭疽症 (Anthrax). Proposed: Psittacosis, Anthrax, Dehydration, Isopoda, Cataract. KCCA: Dehydration, Psittacosis, Cataract, Hypergeometric distribution, Long Island Iced Tea. (c) Japanese input title: ドップラー効果 (Doppler effect). Proposed: LU decomposition, Redshift, Doppler effect, Phenylalanine, Dehydration. KCCA: Long Island Iced Tea, Opportunity cost, Cataract, Hypergeometric distribution, Intel. (d) Japanese input title: メキシコ料理 (Mexican cuisine). Proposed: Mexican cuisine, Long Island Iced Tea, Phoenix, Baldr, China Radio International. KCCA: Taoism, Chariot, Anthrax, Digital Millennium Copyright Act, Alexis de Tocqueville. (e) Japanese input title: フリーウェア (Freeware). Proposed: BBC World News, Opportunity cost, Freeware, NFS, Intel. KCCA: Digital Millennium Copyright Act, China Radio International, Hypergeometric distribution, Taoism, Chariot. Compared with KCCA, which is the second-best method, the proposed method can find both the correct document and many related documents. For example, in Table 1(a), the correct document title is “SD card”. The proposed method outputs the SD card’s document together with documents related to computer technology such as “Intel” and “MPlayer”.
This is because the proposed method can capture the relationships between words and reflect the differences between documents across different domains by learning the latent vectors of the words. 5.2 Matching between Documents and Tags, and between Images and Tags We performed experiments matching documents and tag lists, and matching images and tag lists, with the datasets used in [3]. For matching documents and tag lists, we use datasets obtained from two social bookmarking sites, delicious1 and hatena2, and a patent dataset. The delicious and hatena datasets include pairs consisting of a web page and a tag list labeled by users, and the patent dataset includes pairs consisting of a patent description and a tag list representing the category of the patent. Each web page and each patent description are represented as a bag-of-words, as in the experiments using the Wikipedia dataset, and each tag list is represented as a set of tags. 1https://delicious.com/ 2http://b.hatena.ne.jp/ Figure 3: Precision of matching prediction and its standard deviation on the delicious, hatena, patent and flickr datasets. Figure 4: Two examples of input tag lists and the top five images matched by the proposed method on the flickr dataset. For the matching of images and tag lists, we use the flickr dataset, which consists of pairs of images and tag lists. Each image is represented as a bag-of-visual-words, which is obtained by first extracting features using SIFT, and then applying K-means clustering with 200 components to the SIFT features. For all the datasets, the numbers of training, test and validation pairs are 1,000, 100 and 100, respectively. Figure 3 shows the precision of the matching prediction of the proposed and comparison methods for the delicious, hatena, patent and flickr datasets. The precision of the comparison methods on these datasets was much the same as that of random prediction.
Nevertheless, the proposed method achieved very high precision, particularly for the delicious, hatena and patent datasets. Figure 4 shows examples of input tag lists and the top five images matched by the proposed method on the flickr dataset. In these examples, the proposed method found the correct images and similar related images from the given tag lists. 6 Conclusion We have proposed a novel kernel-based method for addressing cross-domain instance matching tasks with bag-of-words data. The proposed method represents each feature in all the domains as a latent vector in a shared latent space to capture the relationships between features. Each instance is represented by a distribution of the latent vectors of the features associated with the instance, which can be regarded as samples from the unknown distribution corresponding to the instance. To calculate the difference between the distributions efficiently and nonparametrically, we employ the framework of the kernel embeddings of distributions, and we learn the latent vectors so as to minimize the difference between the distributions of paired instances in a reproducing kernel Hilbert space. Experiments on various types of cross-domain datasets confirmed that the proposed method significantly outperforms existing methods for cross-domain matching. Acknowledgments. This work was supported by JSPS Grant-in-Aid for JSPS Fellows (259867). References [1] T. Zhang, K. Liu, and J. Zhao. Cross Lingual Entity Linking with Bilingual Topic Model. In Proceedings of the Twenty-Third International Joint Conference on Artificial Intelligence, 2013. [2] Yunchao Gong, Qifa Ke, Michael Isard, and Svetlana Lazebnik. A Multi-View Embedding Space for Modeling Internet Images, Tags, and Their Semantics. International Journal of Computer Vision, 106(2):210–233, October 2013. [3] Tomoharu Iwata, T. Yamada, and N. Ueda. Modeling Social Annotation Data with Content Relevance using a Topic Model. In Advances in Neural Information Processing Systems.
Citeseer, 2009. [4] Bin Li, Qiang Yang, and Xiangyang Xue. Transfer Learning for Collaborative Filtering via a Rating-Matrix Generative Model. In Proceedings of the 26th Annual International Conference on Machine Learning, 2009. [5] H. Hotelling. Relations Between Two Sets of Variates. Biometrika, 28:321–377, 1936. [6] S. Akaho. A Kernel Method for Canonical Correlation Analysis. In Proceedings of the International Meeting of the Psychometric Society, number 4, 2001. [7] Alexei Vinokourov, John Shawe-Taylor, and Nello Cristianini. Inferring a Semantic Representation of Text via Cross-Language Correlation Analysis. In Advances in Neural Information Processing Systems, 2003. [8] Yaoyong Li and John Shawe-Taylor. Using KCCA for Japanese-English Cross-Language Information Retrieval and Document Classification. Journal of Intelligent Information Systems, 27(2):117–133, September 2006. [9] Nikhil Rasiwasia, Jose Costa Pereira, Emanuele Coviello, Gabriel Doyle, Gert R. G. Lanckriet, Roger Levy, and Nuno Vasconcelos. A New Approach to Cross-Modal Multimedia Retrieval. In Proceedings of the International Conference on Multimedia, 2010. [10] Patrik Kamencay, Robert Hudec, Miroslav Benco, and Martina Zachariasova. 2D-3D Face Recognition Method Based on a Modified CCA-PCA Algorithm. International Journal of Advanced Robotic Systems, 2014. [11] Tomoharu Iwata, Shinji Watanabe, and Hiroshi Sawada. Fashion Coordinates Recommender System Using Photographs from Fashion Magazines. In Proceedings of the Twenty-Second International Joint Conference on Artificial Intelligence. AAAI Press, July 2011. [12] Jiquan Ngiam, Aditya Khosla, Mingyu Kim, Juhan Nam, Honglak Lee, and Andrew Y. Ng. Multimodal Deep Learning. In Proceedings of the 28th International Conference on Machine Learning, pages 689–696, 2011. [13] Galen Andrew, Raman Arora, Jeff Bilmes, and Karen Livescu. Deep Canonical Correlation Analysis. In Proceedings of the 30th International Conference on Machine Learning, pages 1247–1255, 2013.
[14] Alex Smola, Arthur Gretton, Le Song, and Bernhard Schölkopf. A Hilbert Space Embedding for Distributions. In Algorithmic Learning Theory, 2007. [15] A. Gretton, K. Fukumizu, C. H. Teo, L. Song, B. Schölkopf, and A. J. Smola. A Kernel Statistical Test of Independence. In Advances in Neural Information Processing Systems, 2008. [16] Krikamol Muandet, Kenji Fukumizu, Francesco Dinuzzo, and Bernhard Schölkopf. Learning from Distributions via Support Measure Machines. In Advances in Neural Information Processing Systems, 2012. [17] Krikamol Muandet and Bernhard Schölkopf. One-Class Support Measure Machines for Group Anomaly Detection. In Proceedings of the Twenty-Ninth Conference on Uncertainty in Artificial Intelligence, 2013. [18] M. Dudik, S. J. Phillips, and R. E. Schapire. Maximum Entropy Density Estimation with Generalized Regularization and an Application to Species Distribution Modeling. Journal of Machine Learning Research, 8:1217–1260, 2007. [19] Dino Sejdinovic, Arthur Gretton, and Wicher Bergsma. A Kernel Test for Three-Variable Interactions. In Advances in Neural Information Processing Systems, 2013. [20] Yuya Yoshikawa, Tomoharu Iwata, and Hiroshi Sawada. Latent Support Measure Machines for Bag-of-Words Data Classification. In Advances in Neural Information Processing Systems, 2014. [21] Yuya Yoshikawa, Tomoharu Iwata, and Hiroshi Sawada. Non-linear Regression for Bag-of-Words Data via Gaussian Process Latent Variable Set Model. In Proceedings of the 29th AAAI Conference on Artificial Intelligence, 2015. [22] Bharath K. Sriperumbudur, Arthur Gretton, Kenji Fukumizu, Bernhard Schölkopf, and Gert R. G. Lanckriet. Hilbert Space Embeddings and Metrics on Probability Measures. The Journal of Machine Learning Research, 11:1517–1561, 2010. [23] Dong C. Liu and Jorge Nocedal. On the Limited Memory BFGS Method for Large Scale Optimization. Mathematical Programming, 45(1-3):503–528, August 1989. | 2015 | 400 |
5,926 | Learning Large-Scale Poisson DAG Models based on OverDispersion Scoring Gunwoong Park Department of Statistics University of Wisconsin-Madison Madison, WI 53706 parkg@stat.wisc.edu Garvesh Raskutti Department of Statistics Department of Computer Science Wisconsin Institute for Discovery, Optimization Group University of Wisconsin-Madison Madison, WI 53706 raskutti@cs.wisc.edu Abstract In this paper, we address the question of identifiability and learning algorithms for large-scale Poisson Directed Acyclic Graphical (DAG) models. We define general Poisson DAG models as models where each node is a Poisson random variable with rate parameter depending on the values of its parents in the underlying DAG. First, we prove that Poisson DAG models are identifiable from observational data, and present a polynomial-time algorithm that learns the Poisson DAG model under suitable regularity conditions. The main idea behind our algorithm is based on overdispersion: variables that are conditionally Poisson are overdispersed relative to variables that are marginally Poisson. Our algorithm exploits overdispersion along with methods for learning sparse Poisson undirected graphical models for faster computation. We provide both theoretical guarantees and simulation results for both small and large-scale DAGs. 1 Introduction Modeling large-scale multivariate count data is an important challenge that arises in numerous applications such as neuroscience, systems biology and many others. One approach that has received significant attention is the graphical modeling framework, since graphical models include a broad class of dependence models for different data types. Broadly speaking, there are two classes of graphical models: (1) undirected graphical models or Markov random fields and (2) directed acyclic graphical (DAG) models or Bayesian networks.
Between undirected graphical models and DAGs, undirected graphical models have generally received more attention in the large-scale data setting since both learning and inference algorithms scale to larger datasets. In particular, for multivariate count data Yang et al. [1] introduce undirected Poisson graphical models. Yang et al. [1] define undirected Poisson graphical models so that each node is a Poisson random variable with rate parameter depending only on its neighboring nodes in the graph. As pointed out in Yang et al. [1], one of the major challenges with Poisson undirected graphical models is ensuring global normalizability. Directed acyclic graphs (DAGs) or Bayesian networks are a different class of generative models that model directional or causal relationships (see e.g. [2, 3] for details). Such directional relationships naturally arise in most applications but are difficult to model based on observational data. One of the benefits of DAG models is that they have a straightforward factorization into conditional distributions [4], and hence no issues of normalizability arise as they do for the undirected graphical models mentioned earlier. However, a number of challenges arise that make learning DAG models often impossible for large datasets, even when variables have a natural causal or directional structure. These issues are: (1) identifiability, since inferring causal directions from data is often not possible; (2) computational complexity, since it is often computationally infeasible to search over the space of DAGs [5]; (3) sample size guarantees, since fundamental identifiability assumptions such as faithfulness often require extremely large sample sizes to be satisfied, even when the number of nodes p is small (see e.g. [6]). In this paper, we define Poisson DAG models and address these three issues.
In Section 3 we prove that Poisson DAG models are identifiable, and in Section 4 we introduce a polynomial-time DAG learning algorithm for Poisson DAGs which we call OverDispersion Scoring (ODS). The main idea behind proving identifiability is the overdispersion of variables that are conditionally Poisson but not marginally Poisson. Using overdispersion, we prove that it is possible to learn the causal ordering of Poisson DAGs with a polynomial-time algorithm, and once the ordering is known, the problem of learning DAGs reduces to a simple set of neighborhood regression problems. While overdispersion of conditionally Poisson random variables is a well-known phenomenon that is exploited in many applications (see e.g. [7, 8]), to our knowledge overdispersion has never been exploited in DAG model learning. Statistical guarantees for learning the causal ordering are provided in Section 4.2, and we provide numerical experiments on both small DAGs and large-scale DAGs with up to 5000 nodes. Our theoretical guarantees prove that even in the setting where the number of nodes p is larger than the sample size n, it is possible to learn the causal ordering under the assumption that the so-called moralized graph of the DAG has small degree. Our numerical experiments support our theoretical results and show that our ODS algorithm performs well compared to other state-of-the-art DAG learning methods. They also confirm that our ODS algorithm is one of the few DAG-learning algorithms that performs well in terms of statistical and computational complexity in the high-dimensional p > n setting. 2 Poisson DAG Models In this section, we define general Poisson DAG models. A DAG G = (V, E) consists of a set of vertices V and a set of directed edges E with no directed cycle. We usually set V = {1, 2, . . . , p} and associate a random vector (X1, X2, . . . , Xp) with probability distribution P over the vertices in G.
A directed edge from vertex j to k is denoted by (j, k) or j → k. The set Pa(k) of parents of a vertex k consists of all nodes j such that (j, k) ∈ E. One of the convenient properties of DAG models is that the joint distribution f(X1, X2, ..., Xp) factorizes in terms of the conditional distributions as follows [4]:

f(X_1, X_2, \ldots, X_p) = \prod_{j=1}^{p} f_j(X_j \mid X_{Pa(j)}),

where f_j(X_j | X_{Pa(j)}) refers to the conditional distribution of node X_j given its parents. The basic property of Poisson DAG models is that each conditional distribution f_j(x_j | x_{Pa(j)}) is a Poisson distribution. More precisely, for Poisson DAG models:

X_j \mid X_{\{1,2,\ldots,p\} \setminus \{j\}} \sim \mathrm{Poisson}\bigl(g_j(X_{Pa(j)})\bigr), \qquad (1)

where g_j(\cdot) is an arbitrary function of X_{Pa(j)}. To take a concrete example, g_j(\cdot) can be the link function for the univariate Poisson generalized linear model (GLM), g_j(X_{Pa(j)}) = \exp\bigl(\theta_j + \sum_{k \in Pa(j)} \theta_{jk} X_k\bigr), where (\theta_{jk})_{k \in Pa(j)} are the linear weights. Using the factorization above together with (1), the overall joint distribution is:

f(X_1, \ldots, X_p) = \exp\Bigl( \sum_{j \in V} \theta_j X_j + \sum_{(k,j) \in E} \theta_{jk} X_k X_j - \sum_{j \in V} \log X_j! - \sum_{j \in V} e^{\theta_j + \sum_{k \in Pa(j)} \theta_{jk} X_k} \Bigr). \qquad (2)

To contrast this formulation with the Poisson undirected graphical model in Yang et al. [1], the joint distribution for undirected graphical models has the form:

f(X_1, \ldots, X_p) = \exp\Bigl( \sum_{j \in V} \theta_j X_j + \sum_{(k,j) \in E} \theta_{jk} X_k X_j - \sum_{j \in V} \log X_j! - A(\theta) \Bigr), \qquad (3)

where A(θ) is the log-partition function, i.e. the log of the normalization constant. While the two forms (2) and (3) look quite similar, the key difference is the normalization constant A(θ) in (3), as opposed to the term \sum_{j \in V} e^{\theta_j + \sum_{k \in Pa(j)} \theta_{jk} X_k} in (2), which depends on X. To ensure the undirected graphical model representation in (3) is a valid distribution, A(θ) must be finite, which guarantees the distribution is normalizable; Yang et al. [1] prove that A(θ) is finite if and only if all the θ_{jk} values are less than or equal to 0. 3 Identifiability In this section, we prove that Poisson DAG models are identifiable under a very mild condition.
In general, DAG models can only be identified up to their Markov equivalence class (see e.g. [3]). However, in some cases it is possible to identify the DAG by exploiting specific properties of the distribution. For example, Peters and Bühlmann prove that Gaussian DAGs based on structural equation models with known or equal variances are identifiable [9], Shimizu et al. [10] prove identifiability for linear non-Gaussian structural equation models, and Peters et al. [11] prove identifiability of non-parametric structural equation models with additive independent noise. Here we show that Poisson DAG models are also identifiable, using the idea of overdispersion. To provide intuition, we begin by showing the identifiability of a two-node Poisson DAG model. The basic idea is that the dependence of a child node on its parent makes the child variable overdispersed. To be precise, consider the three models: M1: X1 ∼ Poisson(λ1), X2 ∼ Poisson(λ2), where X1 and X2 are independent; M2: X1 ∼ Poisson(λ1) and X2 | X1 ∼ Poisson(g2(X1)); and M3: X2 ∼ Poisson(λ2) and X1 | X2 ∼ Poisson(g1(X2)). Our goal is to determine whether the underlying DAG model is M1, M2, or M3 (Figure 1: directed graphs of M1, M2 and M3). Now we exploit the fact that for a Poisson random variable X, Var(X) = E(X), while for a variable that is conditionally Poisson, the variance is overdispersed relative to the mean. Hence for M1, Var(X1) = E(X1) and Var(X2) = E(X2). For M2, Var(X1) = E(X1), while

Var(X2) = E[Var(X2 | X1)] + Var[E(X2 | X1)] = E[g2(X1)] + Var[g2(X1)] > E[g2(X1)] = E(X2),

as long as Var(g2(X1)) > 0. Similarly, under M3, Var(X2) = E(X2) and Var(X1) > E(X1) as long as Var(g1(X2)) > 0. Hence we can identify models M1, M2, and M3 by testing whether the variance is greater than, or equal to, the expectation.
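The two-node argument can be checked with a quick simulation. The sketch below is ours, not the authors' code: the GLM link and the parameter values (`lam1`, `theta2`, `theta21`) are illustrative assumptions, with a non-positive `theta21` so the conditional rates stay bounded.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Model M2: X1 ~ Poisson(lam1), X2 | X1 ~ Poisson(g2(X1)) with a GLM link
# g2(x) = exp(theta2 + theta21 * x). Parameter values are illustrative only.
lam1, theta2, theta21 = 2.0, 1.0, -0.5
x1 = rng.poisson(lam1, size=n)
x2 = rng.poisson(np.exp(theta2 + theta21 * x1))

# Dispersion index Var/E: close to 1 for the marginally Poisson parent,
# strictly greater than 1 for the child, since Var(g2(X1)) > 0.
print(x1.var() / x1.mean())
print(x2.var() / x2.mean())
```

Running this shows the parent's dispersion index hovering near 1 while the child's is clearly above 1, which is exactly the asymmetry that distinguishes M2 from M1 and M3.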
With finite sample size n, the quantities E(·) and Var(·) can be estimated from data; we consider the finite-sample setting in Sections 4 and 4.2. Now we extend this idea to provide an identifiability condition for general Poisson DAG models. The key idea in extending identifiability from the bivariate to the multivariate scenario is to condition on the parents of each node and then test for overdispersion. The general p-variate result is as follows: Theorem 3.1. Assume that for any j ∈ V, K ⊂ Pa(j), and S ⊂ {1, 2, ..., p} \ K, Var(g_j(X_{Pa(j)}) | X_S) > 0. Then the Poisson DAG model is identifiable. We defer the proof to the supplementary material. Once again, the main idea of the proof is overdispersion. To explain the required assumption, note that for any j ∈ V and S ⊂ Pa(j), Var(X_j | X_S) − E(X_j | X_S) = Var(g_j(X_{Pa(j)}) | X_S). Note that if S = Pa(j) or {1, ..., j − 1}, then Var(g_j(X_{Pa(j)}) | X_S) = 0; otherwise Var(g_j(X_{Pa(j)}) | X_S) > 0 by our assumption. (Figure 2: moralized graph Gm for DAG G.) 4 Algorithm Our algorithm, which we call OverDispersion Scoring (ODS), consists of three main steps: 1) estimating a candidate parents set [1, 12, 13] using existing undirected graph learning algorithms; 2) estimating a causal ordering using overdispersion scoring; and 3) estimating directed edges using standard regression algorithms such as the Lasso. Step 3) is a standard problem for which we use off-the-shelf algorithms. Step 1) allows us to reduce both computational and sample complexity by exploiting sparsity of the moralized or undirected graphical model representation of the DAG, which we introduce shortly. Step 2) exploits overdispersion to learn a causal ordering. An important concept we need to introduce for Step 1) of our algorithm is the moral graph or undirected graphical model representation of the DAG (see e.g. [14]).
The moralized graph Gm of a DAG G = (V, E) is an undirected graph Gm = (V, Eu), where Eu contains the edge set E without directions, plus edges between any nodes that are parents of a common child. Fig. 2 illustrates the moralized graph for a simple 3-node example where E = {(1, 3), (2, 3)} for DAG G. Note that 1 and 2 are parents of the common child 3, hence Eu = {(1, 2), (1, 3), (2, 3)}, where the additional edge (1, 2) arises from the fact that nodes 1 and 2 are both parents of node 3. Further, let N(j) := {k ∈ {1, 2, ..., p} | (j, k) or (k, j) ∈ Eu} denote the neighborhood set of a node j in the moralized graph Gm. Let {X^{(i)}}_{i=1}^{n} denote n samples drawn from the Poisson DAG model G. Let π : {1, 2, ..., p} → {1, 2, ..., p} be a bijective function corresponding to a permutation or causal ordering. We also use the hat notation \hat{\cdot} to denote an estimate based on the data. For ease of notation, for any j ∈ {1, 2, ..., p} and S ⊂ {1, 2, ..., p}, let μ_{j|S} and μ_{j|S}(x_S) represent E(X_j | X_S) and E(X_j | X_S = x_S), respectively. Furthermore, let σ²_{j|S} and σ²_{j|S}(x_S) denote Var(X_j | X_S) and Var(X_j | X_S = x_S), respectively. We also define n(x_S) = \sum_{i=1}^{n} 1(X^{(i)}_S = x_S) and n_S = \sum_{x_S} n(x_S) 1(n(x_S) ≥ c_0 n) for an arbitrary c_0 ∈ (0, 1). The computation of the score \hat{s}_{jk} in Step 2) of Algorithm 1 involves the following equation:

\hat{s}_{jk} = \sum_{x \in \mathcal{X}(\hat{C}_{jk})} \frac{n(x)}{n_{\hat{C}_{jk}}} \Bigl( \hat{\sigma}^2_{j|\hat{C}_{jk}}(x) - \hat{\mu}_{j|\hat{C}_{jk}}(x) \Bigr), \qquad (4)

where \hat{C}_{jk} refers to an estimated candidate set of parents specified in Step 2) of Algorithm 1, and \mathcal{X}(\hat{C}_{jk}) = \{x \in \{X^{(1)}_{\hat{C}_{jk}}, X^{(2)}_{\hat{C}_{jk}}, \ldots, X^{(n)}_{\hat{C}_{jk}}\} \mid n(x) \geq c_0 n\}, ensuring that we have enough samples for each element we select. In addition, c_0 is a tuning parameter of our algorithm that we specify in our main Theorem 4.2 and in our numerical experiments. We can use a number of standard algorithms for Step 1) of our ODS algorithm, since it boils down to finding a candidate set of parents.
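The moralization just defined can be sketched in a few lines of Python (the helper name and graph representation are our own choices, not the paper's code): keep every directed edge without its direction, then marry any two parents of a common child.

```python
from itertools import combinations

def moralize(p, edges):
    """Moralized graph of a DAG on nodes 1..p.

    edges: iterable of directed edges (j, k), meaning j -> k.
    Returns the undirected edge set Eu as a set of sorted pairs.
    """
    parents = {k: set() for k in range(1, p + 1)}
    eu = set()
    for j, k in edges:
        eu.add(tuple(sorted((j, k))))       # keep the edge, drop its direction
        parents[k].add(j)
    for k in range(1, p + 1):               # marry parents of a common child
        for a, b in combinations(sorted(parents[k]), 2):
            eu.add((a, b))
    return eu

# The 3-node example from Fig. 2: E = {(1, 3), (2, 3)}.
print(sorted(moralize(3, [(1, 3), (2, 3)])))  # -> [(1, 2), (1, 3), (2, 3)]
```

On the Fig. 2 example the function adds exactly the (1, 2) edge between the co-parents of node 3, reproducing Eu = {(1, 2), (1, 3), (2, 3)}.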
The main purpose of Step 1) is to reduce both the computational and the sample complexity by exploiting sparsity in the moralized graph. In Step 1), a candidate set of parents is generated for each node, which in principle could be the entire set of nodes. However, since Step 2) requires computation of a conditional mean and variance, both the sample complexity and the computational complexity depend significantly on the number of variables we condition on, as illustrated in Sections 4.1 and 4.2. Hence, by making the set of candidate parents for each node as small as possible, we gain significant computational and statistical improvements by exploiting the graph structure. A similar step is taken in the MMHC [15] and SC [16] algorithms. We choose a candidate set of parents by learning the moralized graph Gm and then using the neighborhood set N(j) for each j. Hence Step 1) reduces to a standard undirected graphical model learning algorithm. A number of choices are available for Step 1), including the neighborhood regression approach of Yang et al. [1] as well as standard DAG learning algorithms that find a candidate parents set, such as HITON [13] and MMPC [15]. Algorithm 1: OverDispersion Scoring (ODS). Input: n samples from the given Poisson DAG model,
X^{(1)}, ..., X^{(n)} ∈ ({0} ∪ N)^p.
Output: a causal ordering π̂ ∈ N^p and a graph structure Ê ∈ {0, 1}^{p×p}.
  Step 1: estimate the undirected edges Êu of the moralized graph, with neighborhood sets N̂(j);
  Step 2: estimate the causal ordering using overdispersion scores:
    for i ∈ {1, 2, ..., p} do  ŝ_i = σ̂²_i − μ̂_i  end
    the first element of the causal ordering is π̂_1 = argmin_j ŝ_j;
    for j = 2, 3, ..., p − 1 do
      for k ∈ N̂(π̂_{j−1}) ∩ {1, 2, ..., p} \ {π̂_1, ..., π̂_{j−1}} do
        the candidate parents set is Ĉ_{jk} = N̂(k) ∩ {π̂_1, π̂_2, ..., π̂_{j−1}};
        calculate ŝ_{jk} using (4);
      end
      the jth element of the causal ordering is π̂_j = argmin_k ŝ_{jk};
      Step 3: estimate the directed edges toward π̂_j, denoted by D̂_j;
    end
    the pth element of the causal ordering is π̂_p = {1, 2, ..., p} \ {π̂_1, π̂_2, ..., π̂_{p−1}};
    the directed edges toward π̂_p are D̂_p = N̂(π̂_p);
  Return the estimated causal ordering π̂ = (π̂_1, π̂_2, ..., π̂_p);
  Return the estimated edge structure Ê = {D̂_2, D̂_3, ..., D̂_p}.
Step 2) learns the causal ordering by assigning an overdispersion score to each node. The basic idea is to determine which nodes are overdispersed based on the sample conditional mean and conditional variance. The causal ordering is determined one node at a time by selecting the node with the smallest overdispersion score, which corresponds to the node least likely to be conditionally Poisson and most likely to be marginally Poisson. Finding the causal ordering is usually the most challenging step of DAG learning, since once the causal ordering is learned, all that remains is to find the edge set of the DAG. Step 3), the final step, finds the directed edge set of the DAG G by finding the parent set of each node. Using Steps 1) and 2), finding the parent set of node j boils down to selecting which variables are parents out of the candidate parents of node j generated in Step 1), intersected with all elements before node j in the causal ordering from Step 2).
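As a concrete, unofficial rendering of the score in (4), the following sketch conditions on each sufficiently frequent configuration of the candidate-parent columns and averages the variance-minus-mean gap. The function name, array layout, and defaults are our assumptions, not the authors' implementation.

```python
import numpy as np

def overdispersion_score(X, j, C, c0=0.005):
    """Empirical overdispersion score in the spirit of Eq. (4).

    X  : (n, p) array of counts
    j  : index of the target column
    C  : list of candidate-parent column indices (may be empty)
    c0 : frequency threshold; parent configurations seen fewer than
         c0 * n times are dropped, mirroring the paper's truncation.
    """
    n = X.shape[0]
    if not C:                       # marginal score: variance minus mean
        return X[:, j].var() - X[:, j].mean()
    configs, inverse, counts = np.unique(
        X[:, C], axis=0, return_inverse=True, return_counts=True)
    inverse = np.asarray(inverse).ravel()
    keep = counts >= c0 * n
    n_C = counts[keep].sum()
    score = 0.0
    for idx in np.flatnonzero(keep):
        xj = X[inverse == idx, j]   # rows matching this parent configuration
        score += (counts[idx] / n_C) * (xj.var() - xj.mean())
    return score
```

Conditioning on the true parents drives the score toward zero (the child is conditionally Poisson), while the unconditional score of a conditionally Poisson node stays positive, which is exactly the gap Step 2) exploits when ordering nodes.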
Hence we have p variable selection regression problems, which can be solved using the GLMLasso [17] as well as standard DAG learning algorithms. 4.1 Computational Complexity Steps 1) and 3) use existing algorithms with known computational complexity; clearly, the complexity of Steps 1) and 3) depends on the choice of algorithm. For example, if we use the neighborhood selection GLMLasso algorithm [17], as in Yang et al. [1], the worst-case complexity is O(min(n, p)np) for a single Lasso run, but since there are p nodes, the total worst-case complexity is O(min(n, p)np²). Similarly, if we use the GLMLasso for Step 3), the computational complexity is also O(min(n, p)np²). As we show in the numerical experiments, DAG-based algorithms for Step 1) tend to run more slowly than neighborhood regression based on the GLMLasso. Step 2), where we estimate the causal ordering, has (p − 1) iterations, and each iteration computes a number of overdispersion scores ŝ_j and ŝ_{jk} bounded by O(|K|), where K = N̂(π̂_{j−1}) ∩ {1, 2, ..., p} \ {π̂_1, ..., π̂_{j−1}} is the candidate set for each element of the causal ordering, which is in turn bounded by the maximum degree d of the moralized graph. Hence the total number of overdispersion scores that need to be computed is O(pd). Since the time for calculating each overdispersion score, the difference between a conditional variance and a conditional mean, is proportional to n, the time complexity is O(npd). In the worst case, where the degree of the moralized graph is p, the computational complexity of Step 2) is O(np²). As discussed earlier, there is a significant computational saving from exploiting a sparse moralized graph, which is why we perform Step 1) of the algorithm. Hence Steps 1) and 3) are the main computational bottlenecks of our ODS algorithm; the addition of Step 2), which estimates the causal ordering, does not significantly add to the computational bottleneck.
Consequently, our ODS algorithm, which is designed for learning DAGs, is almost as computationally efficient as standard methods for learning undirected graphical models. 4.2 Statistical Guarantees In this section, we show that our ODS algorithm consistently recovers a valid causal ordering under suitable regularity conditions. We begin by stating the assumptions we impose on the functions g_j(\cdot). Assumption 4.1. (A1) For all j ∈ V, K ⊂ Pa(j), and all S ⊂ {1, 2, ..., p} \ K, there exists an m > 0 such that Var(g_j(X_{Pa(j)}) | X_S) > m. (A2) For all j ∈ V, there exists an M < ∞ such that E[exp(g_j(X_{Pa(j)}))] < M. (A1) is a stronger version of the identifiability assumption Var(g_j(X_{Pa(j)}) | X_S) > 0 in Theorem 3.1: since we are in the finite-sample setting, we need the conditional variance to be lower bounded by a constant bounded away from 0. (A2) is a condition on the tail behavior of g_j(X_{Pa(j)}) for controlling the tails of the score ŝ_{jk} in Step 2) of our ODS algorithm. To take a concrete example for which (A1) and (A2) are satisfied, it is straightforward to show that the GLM DAG model (2) with non-positive values of {θ_{jk}} satisfies both (A1) and (A2). The non-positivity constraint on the θ's is sufficient but not necessary, and ensures that the rate parameters do not grow too large. Now we present the main result under Assumptions (A1) and (A2). For general DAGs, the true causal ordering π* is not unique; therefore let E(π*) denote all the causal orderings that are consistent with the true DAG G*. Further, recall that d denotes the maximum degree of the moralized graph G*_m. Theorem 4.2 (Recovery of a causal ordering). Consider a Poisson DAG model as specified in (1), with a set of true causal orderings E(π*), whose rate functions g_j(\cdot) satisfy Assumption 4.1. If the sample-size threshold parameter satisfies c_0 ≤ n^{−1/(5+d)}, then there exist positive constants C_1, C_2, C_3 such that

P(π̂ ∉ E(π*)) ≤ C_1 exp(−C_2 n^{1/(5+d)} + C_3 log max{n, p}).

We defer the proof to the supplementary material.
The main idea behind the proof uses the overdispersion property exploited in Theorem 3.1, in combination with concentration bounds that exploit Assumption (A2). Note once again that the maximum degree d of the undirected graph plays an important role in the sample complexity, which is why Step 1) is so important: the size of the conditioning set depends on the degree d of the moralized graph, so d matters for both the sample complexity and the computational complexity. Theorem 4.2 can be used in combination with sample complexity guarantees for Steps 1) and 3) of our ODS algorithm to prove that the output DAG Ĝ is the true DAG G* with high probability. Sample complexity guarantees for Steps 1) and 3) depend on the choice of algorithm, but for neighborhood regression based on the GLMLasso, provided n = Ω(d log p), Steps 1) and 3) should be consistent. For Theorem 4.2, if the triple (n, d, p) satisfies n = Ω((log p)^{5+d}), then our ODS algorithm recovers the true DAG. Hence if the moralized graph is sparse, ODS recovers the true DAG in the high-dimensional p > n setting. DAG learning algorithms that apply to the high-dimensional setting are not common, since they typically rely on faithfulness or similarly restrictive assumptions that are not satisfied in the p > n setting. Note that if the DAG is not sparse and d = Ω(p), our sample complexity is extremely large when p is large. This makes intuitive sense: if the number of candidate parents is large, we need to condition on a large set of variables, which is very sample-intensive. Our sample complexity is certainly not optimal, given the choice of tuning parameter c_0 ≤ n^{−1/(5+d)}; determining the optimal sample complexity remains an open question.
(Figure 3: accuracy of successful recovery of a causal ordering via our ODS algorithm using different base algorithms; panels (a) p = 10, (b) p = 50, (c) p = 100, (d) p = 5000, all with d ≥ 3.) The larger sample complexity of our ODS algorithm relative to undirected graphical model learning is mainly due to the fact that DAG learning is an intrinsically harder problem than undirected graph learning when the causal ordering is unknown. Furthermore, note that Theorem 4.2 does not require any additional identifiability assumptions such as faithfulness, which severely increases the sample complexity for large-scale DAGs [6]. 5 Numerical Experiments In this section, we support our theoretical results with numerical experiments and show that our ODS algorithm performs favorably compared to state-of-the-art DAG learning methods. The simulation study was conducted using 50 realizations of a p-node random Poisson DAG, generated as follows. The g_j(\cdot) functions for the general Poisson DAG model (1) were chosen using the standard GLM link function (i.e. g_j(X_{Pa(j)}) = exp(θ_j + Σ_{k∈Pa(j)} θ_{jk} X_k)), resulting in the GLM DAG model (2). We experimented with other choices of g_j(\cdot) but only present results for the GLM DAG model (2); our ODS algorithm works well as long as Assumption 4.1 is satisfied, regardless of the choice of g_j(\cdot). In all results presented, the (θ_{jk}) parameters were chosen uniformly at random in the range θ_{jk} ∈ [−1, −0.7], although any values far from zero that satisfy Assumption 4.1 work well. In fact, smaller values of θ_{jk} are more favorable to our ODS algorithm than to the state-of-the-art DAG learning methods, because of the weak dependency between nodes.
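The simulation setup just described can be mimicked as follows. This is a sketch with our own helper name and conventions; the parameter range θ_{jk} ∈ [−1, −0.7], 2 parents per node, and ancestral sampling in the causal order {1, ..., p} follow the text, while the zero intercepts are our simplifying assumption.

```python
import numpy as np

def sample_glm_poisson_dag(n, p, num_parents=2, seed=0):
    """Random GLM Poisson DAG with causal ordering 1..p; draws n samples
    by ancestral sampling. theta_jk ~ Uniform[-1, -0.7] as in the text."""
    rng = np.random.default_rng(seed)
    theta = np.zeros(p)                    # node intercepts (assumed 0 here)
    W = np.zeros((p, p))                   # W[k, j] = theta_jk for edge k -> j
    for j in range(1, p):
        k_par = rng.choice(j, size=min(num_parents, j), replace=False)
        W[k_par, j] = rng.uniform(-1.0, -0.7, size=len(k_par))
    X = np.zeros((n, p), dtype=int)
    for j in range(p):                     # ancestral sampling in causal order
        rate = np.exp(theta[j] + X[:, :j] @ W[:j, j])
        X[:, j] = rng.poisson(rate)
    return X, W

X, W = sample_glm_poisson_dag(n=1000, p=10)
```

Because all θ_{jk} are non-positive, every conditional rate stays bounded, which matches the sufficient condition for Assumption 4.1 discussed in Section 4.2.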
DAGs are generated randomly with a fixed unique causal ordering {1, 2, ..., p}, with edges generated randomly while respecting the desired maximum degree constraints for the DAG. In our experiments, we always set the thresholding constant c_0 = 0.005, although any value below 0.01 seems to work well. In Fig. 3, we plot the proportion of simulations in which our ODS algorithm recovers the correct causal ordering, in order to validate Theorem 4.2. All graphs in Fig. 3 have exactly 2 parents for each node, and we plot how the accuracy in recovering the true π* varies as a function of n for n ∈ {500, 1000, 2500, 5000, 10000} and for different node sizes (a) p = 10, (b) p = 50, (c) p = 100, and (d) p = 5000. As we can see, even when p = 5000, our ODS algorithm recovers the true causal ordering about 40% of the time when n is approximately 5000, and for smaller DAGs the accuracy is 100%. In each sub-figure, 3 different algorithms are used for Step 1): GLMLasso [17], where we choose λ = 0.1; MMPC [15] with α = 0.005; and HITON [13], again with α = 0.005; plus an oracle in which the edges of the true moralized graph are used. As Fig. 3 shows, the GLMLasso seems to be the best-performing algorithm in terms of recovery, so we use the GLMLasso for Steps 1) and 3) in the remaining figures. GLMLasso was also the only algorithm that scaled to the p = 5000 setting. However, it should be pointed out that the GLMLasso is not necessarily consistent, and its performance depends heavily on the choice of g_j(\cdot). Recall that the degree d refers to the maximum degree of the moralized DAG. Fig. 4 compares our ODS algorithm, in terms of Hamming distance, with the state-of-the-art PC [3], MMHC [15], GES [18], and SC [16] algorithms. For the PC, MMHC, and SC algorithms we use α = 0.005, while for the GES algorithm we use the mBDe [19] (modified Bayesian Dirichlet equivalent) score, since it performs better than other score choices.
We consider node sizes of p = 10 in (a) and (b), and p = 100 in (c) and (d), since many of these algorithms do not easily scale to larger node sizes. We consider two Hamming distance measures: in (a) and (c), we measure the Hamming distance to the skeleton of the true DAG, which is the set of edges of the DAG without directions; in (b) and (d), we measure the Hamming distance for the edges with directions. (Figure 4: comparison of our ODS algorithm (black) with the PC, GES, MMHC, and SC algorithms in terms of Hamming distance to skeletons and directed edges; panels (a), (b) p = 10 and (c), (d) p = 100, all with d ≥ 3.) The reason we consider the skeleton is that the PC algorithm does not recover all edge directions of the DAG. We normalize the Hamming distance by dividing by the total number of possible edges, \binom{p}{2} and p(p − 1) respectively, so that the overall score is a percentage. As we can see, our ODS algorithm significantly outperforms the other algorithms. We can also see that as the sample size n grows, our algorithm recovers the true DAG, which is consistent with our theoretical results. It must be pointed out that the choice of DAG model is suited to our ODS algorithm, while these state-of-the-art algorithms apply to more general classes of DAG models. Now we consider the statistical performance for large-scale DAGs. Fig. 5 plots the statistical performance of ODS for large-scale DAGs in terms of (a) recovering the causal ordering; (b) Hamming distance to the true skeleton; and (c) Hamming distance to the true DAG with directions. All graphs in Fig.
5 have exactly 2 parents for each node, and accuracy varies as a function of n for n ∈ {500, 1000, 2500, 5000, 10000} and for different node sizes p ∈ {1000, 2500, 5000}. Fig. 5 shows that our ODS algorithm accurately recovers the causal ordering and the true DAG even in the high-dimensional setting, supporting Theorem 4.2. (Figure 5: performance of our ODS algorithm for large-scale DAGs with p = 1000, 2500, 5000.) Fig. 6 shows the run-time of our ODS algorithm. We measure the running time (a) by varying the node size p from 10 to 125 with fixed n = 100 and 2 parents; (b) by varying the sample size n from 100 to 2500 with fixed p = 20 and 2 parents; and (c) by varying the number of parents of each node |Pa| from 1 to 5 with fixed n = 5000 and p = 20. (Figure 6: time complexity of our ODS algorithm with respect to node size p, sample size n, and parents size |Pa|.) Fig. 6 (a) and (b) support Section 4.1, where the time complexity of our ODS algorithm is shown to be at most O(np²). Fig. 6 (c) shows that the running time is proportional to the parents size, which lower-bounds the degree of the graph; this agrees with the O(npd) time complexity of Step 2) of our ODS algorithm. We can also see that the GLMLasso has the fastest run-time among all algorithms that determine the candidate parent set. References [1] E. Yang, G. Allen, Z. Liu, and P. K.
Ravikumar, “Graphical models via generalized linear models,” in Advances in Neural Information Processing Systems, 2012, pp. 1358–1366.
[2] P. Bonissone, M. Henrion, L. Kanal, and J. Lemmer, “Equivalence and synthesis of causal models,” in Uncertainty in Artificial Intelligence, vol. 6, 1991, p. 255.
[3] P. Spirtes, C. Glymour, and R. Scheines, Causation, Prediction and Search. MIT Press, 2000.
[4] S. L. Lauritzen, Graphical Models. Oxford University Press, 1996.
[5] D. M. Chickering, “Learning Bayesian networks is NP-complete,” in Learning from Data. Springer, 1996, pp. 121–130.
[6] C. Uhler, G. Raskutti, P. Bühlmann, and B. Yu, “Geometry of the faithfulness assumption in causal inference,” The Annals of Statistics, vol. 41, no. 2, pp. 436–463, 2013.
[7] C. B. Dean, “Testing for overdispersion in Poisson and binomial regression models,” Journal of the American Statistical Association, vol. 87, no. 418, pp. 451–457, 1992.
[8] T. Zheng, M. J. Salganik, and A. Gelman, “How many people do you know in prison? Using overdispersion in count data to estimate social structure in networks,” Journal of the American Statistical Association, vol. 101, no. 474, pp. 409–423, 2006.
[9] J. Peters and P. Bühlmann, “Identifiability of Gaussian structural equation models with equal error variances,” Biometrika, ast043 (advance access), 2013.
[10] S. Shimizu, P. O. Hoyer, A. Hyvärinen, and A. Kerminen, “A linear non-Gaussian acyclic model for causal discovery,” The Journal of Machine Learning Research, vol. 7, pp. 2003–2030, 2006.
[11] J. Peters, J. Mooij, D. Janzing et al., “Identifiability of causal graphs using functional models,” arXiv preprint arXiv:1202.3757, 2012.
[12] I. Tsamardinos, L. E. Brown, and C. F. Aliferis, “The max-min hill-climbing Bayesian network structure learning algorithm,” Machine Learning, vol. 65, no. 1, pp. 31–78, 2006.
[13] C. F. Aliferis, I. Tsamardinos, and A.
Statnikov, “HITON: a novel Markov Blanket algorithm for optimal variable selection,” in AMIA Annual Symposium Proceedings, vol. 2003. American Medical Informatics Association, 2003, p. 21.
[14] R. G. Cowell, P. A. Dawid, S. L. Lauritzen, and D. J. Spiegelhalter, Probabilistic Networks and Expert Systems. Springer-Verlag, 1999.
[15] I. Tsamardinos and C. F. Aliferis, “Towards principled feature selection: Relevancy, filters and wrappers,” in Proceedings of the Ninth International Workshop on Artificial Intelligence and Statistics. Morgan Kaufmann Publishers: Key West, FL, USA, 2003.
[16] N. Friedman, I. Nachman, and D. Pe'er, “Learning Bayesian network structure from massive datasets: the sparse candidate algorithm,” in Proceedings of the Fifteenth Conference on Uncertainty in Artificial Intelligence. Morgan Kaufmann Publishers Inc., 1999, pp. 206–215.
[17] J. Friedman, T. Hastie, and R. Tibshirani, “glmnet: Lasso and elastic-net regularized generalized linear models,” R package version, vol. 1, 2009.
[18] D. M. Chickering, “Optimal structure identification with greedy search,” The Journal of Machine Learning Research, vol. 3, pp. 507–554, 2003.
[19] D. Heckerman, D. Geiger, and D. M. Chickering, “Learning Bayesian networks: The combination of knowledge and statistical data,” Machine Learning, vol. 20, no. 3, pp. 197–243, 1995.
Local Causal Discovery of Direct Causes and Effects Tian Gao Qiang Ji Department of ECSE Rensselaer Polytechnic Institute, Troy, NY 12180 {gaot, jiq}@rpi.edu Abstract We focus on the discovery and identification of the direct causes and effects of a target variable in a causal network. State-of-the-art causal learning algorithms generally need to find the global causal structure, in the form of completed partially directed acyclic graphs (CPDAGs), in order to identify the direct causes and effects of a target variable. While these algorithms are effective, it is often unnecessary and wasteful to find the global structure when we are only interested in the local structure of one target variable (such as a class label). We propose a new local causal discovery algorithm, called Causal Markov Blanket (CMB), to identify the direct causes and effects of a target variable based on Markov Blanket discovery. CMB is designed to conduct causal discovery among multiple variables, but focuses only on finding causal relationships between a specific target variable and other variables. Under standard assumptions, we show both theoretically and experimentally that the proposed local causal discovery algorithm can obtain comparable identification accuracy to global methods while significantly improving their efficiency, often by more than one order of magnitude. 1 Introduction Causal discovery is the process of identifying the causal relationships among a set of random variables. It not only can aid predictions and classifications like feature selection [4], but can also help predict the consequences of given actions, facilitate counterfactual inference, and help explain the underlying mechanisms of the data [13]. A lot of research effort has been focused on predicting causality from observational data [13, 18]. This work can be roughly divided into two sub-areas: causal discovery between a pair of variables and among multiple variables.
We focus on multivariate causal discovery, which searches for correlations and dependencies among variables in causal networks [13]. Causal networks can be used for local or global causal prediction, and thus they can be learned locally or globally. Many causal discovery algorithms for causal networks have been proposed, and the majority of them are global learning algorithms, as they seek to learn global causal structures. The Spirtes-Glymour-Scheines (SGS) [18] and Peter-Clark (PC) [19] algorithms test for the existence of edges between every pair of nodes in order to first find the skeleton, or undirected edges, of a causal network, and then discover all the V-structures, resulting in a partially directed acyclic graph (PDAG). The last step of these algorithms is to orient the remaining edges as much as possible using Meek rules [10], while maintaining consistency with the existing edges. Given a causal network, causal relationships among variables can be directly read off the structure. Due to the complexity of the PC algorithm and unreliable high-order conditional independence tests [9], several works [23, 15] have incorporated Markov Blanket (MB) discovery into causal discovery with a local-to-global approach. The Grow-Shrink (GS) [9] algorithm uses the MBs of each node to build the skeleton of a causal network, discovers all the V-structures, and then uses the Meek rules to complete the global causal structure. The max-min hill climbing (MMHC) [23] algorithm also finds the MBs of each variable first, but then uses the MBs as constraints to reduce the search space for score-based standard hill climbing structure learning. In [15], the authors
All these local-to-global methods rely on the global structure to find the causal relationships and require finding the MBs of all nodes in a graph, even when the interest is only in the causal relationships between one target variable and the other variables. Different MB discovery algorithms can be used, and they fall into two approaches: non-topology-based and topology-based. Non-topology-based methods [5, 9], used by the CS and GS algorithms, greedily test the independence between each variable and the target by directly using the definition of a Markov Blanket. In contrast, more recent topology-based methods [22, 1, 11] aim to improve data efficiency while maintaining a reasonable time complexity by first finding the parents and children (PC) set and then the spouses to complete the MB. Local learning of causal networks generally aims to identify a subset of causal edges in a causal network. The Local Causal Discovery (LCD) algorithm and its variants [3, 17, 7] aim to find causal edges by testing the dependence/independence relationships among every four-variable set in a causal network. Bayesian Local Causal Discovery (BLCD) [8] explores the Y-structures among MB nodes to infer causal edges [6]. While LCD/BLCD algorithms aim to identify a subset of causal edges via special structures among all variables, we focus on finding all the causal edges adjacent to one target variable. In other words, we want to find the causal identity of each node, in terms of direct causes and effects, with respect to one target node. We first use Markov Blankets to find the direct causes and effects, and then propose a new Causal Markov Blanket (CMB) discovery algorithm, which determines the exact causal identities of the MB nodes of a target node by tracking their conditional independence changes, without finding the global causal structure of the causal network.
The proposed CMB algorithm is a complete local discovery algorithm and can identify the same direct causes and effects for a target variable as global methods under standard assumptions. CMB is more scalable than global methods, more efficient than local-to-global methods, and is complete in identifying direct causes and effects of one target while other local methods are not. 2 Background We use V to represent the variable space, capital letters (such as X, Y ) to represent variables, bold letters (such as Z, MB) to represent variable sets, and |Z| to represent the size of set Z. X ⊥⊥Y and X ⊥\⊥Y represent independence and dependence between X and Y , respectively. We assume readers are familiar with related concepts in causal network learning, and only review a few major ones here. In a causal network or causal Bayesian Network [13], nodes correspond to the random variables in a variable set V. Two nodes are adjacent if they are connected by an edge. A directed edge from node X to node Y , (X, Y ) ∈V, indicates X is a parent or direct cause of Y and Y is a child or direct effect of X [12]. Moreover, if there is a directed path from X to Y , then X is an ancestor of Y and Y is a descendant of X. If nonadjacent X and Y have a common child, X and Y are spouses. Three nodes X, Y , and Z form a V-structure [12] if Y has two incoming edges from X and Z, forming X →Y ←Z, and X is not adjacent to Z. Y is a collider on a path if Y has two incoming edges in this path. Y with nonadjacent parents X and Z is an unshielded collider. A path J from node X to Y is blocked [12] by a set of nodes Z if any of the following holds: 1) there is a non-collider node on J belonging to Z; or 2) there is a collider node C on J such that neither C nor any of its descendants belongs to Z. Otherwise, J is unblocked or active. A PDAG is a graph which may have both undirected and directed edges and has at most one edge between any pair of nodes [10].
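The path-blocking rule just stated can be written down directly. The sketch below is illustrative only (not from the paper): a path is a list of nodes, and `parents`/`children` are hypothetical adjacency maps for a DAG.

```python
def is_collider(path, i, parents):
    # Node path[i] is a collider on the path if both path neighbors point into it.
    v = path[i]
    return path[i - 1] in parents[v] and path[i + 1] in parents[v]

def descendants(x, children):
    # All nodes reachable from x via directed edges.
    seen, stack = set(), [x]
    while stack:
        for c in children[stack.pop()]:
            if c not in seen:
                seen.add(c)
                stack.append(c)
    return seen

def path_blocked(path, Z, parents, children):
    """A path is blocked by Z if some non-collider on it lies in Z, or some
    collider on it has neither itself nor any descendant in Z."""
    for i in range(1, len(path) - 1):
        if is_collider(path, i, parents):
            if not (({path[i]} | descendants(path[i], children)) & Z):
                return True
        elif path[i] in Z:
            return True
    return False
```

For example, the collider path X → Y ← Z is blocked by the empty set but becomes active once Y is conditioned on, while the chain X → Y → Z behaves the opposite way.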
CPDAGs [2] represent Markov equivalence classes of DAGs, capturing the same conditional independence relationships with the same skeleton but potentially different edge orientations. A CPDAG contains directed edges that have the same orientation in every DAG of the equivalence class and undirected edges whose orientations are reversible within the equivalence class. Let G be the causal DAG of a causal network with variable set V and P be the joint probability distribution over the variables in V. G and P satisfy the Causal Markov condition [13] if and only if, ∀X ∈V, X is independent of the non-effects of X given its direct causes. The causal faithfulness condition [13] states that G and P are faithful to each other if every independence and conditional independence relation entailed by P is present in G. It enables the recovery of G from sampled data of P. Another widely used assumption of existing causal discovery algorithms is causal sufficiency [12]. A set of variables X ⊆V is causally sufficient if no set of two or more variables in X shares a common cause variable outside V. Without the causal sufficiency assumption, latent confounders between adjacent nodes would be modeled by bi-directed edges [24]. We also assume no selection bias [20] and that we can capture the same independence relationships among variables from the sampled data as the ones from the entire population. Many concepts and properties of a DAG hold in causal networks, such as d-separation and the MB. A Markov Blanket [12] of a target variable T, MBT , in a causal network is the minimal set of nodes conditioned on which all other nodes are independent of T, denoted as X ⊥⊥T|MBT , ∀X ∈{V \ T} \ MBT . Given an unknown distribution P that satisfies the Markov condition with respect to an unknown DAG G0, Markov Blanket Discovery is the process used to estimate the MB of a target node in G0, from independently and identically distributed (i.i.d.) data D of P.
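Under faithfulness, the Markov blanket just defined coincides with the target's parents, children, and spouses (the other parents of its children). A minimal sketch of this composition, using hypothetical parent/child maps for the structure of Figure 1a as partially reconstructed from the case study in Section 3 (A→C, B→C, C→D, D→E, D→G, E→F):

```python
def markov_blanket(T, parents, children):
    """MB(T) = parents(T) ∪ children(T) ∪ spouses(T),
    where spouses are the other parents of T's children."""
    spouses = set()
    for c in children[T]:
        spouses |= parents[c]
    return (parents[T] | children[T] | spouses) - {T}

# Figure 1a, partially reconstructed from the case study (illustrative only):
parents = {'A': set(), 'B': set(), 'C': {'A', 'B'},
           'D': {'C'}, 'E': {'D'}, 'G': {'D'}, 'F': {'E'}}
children = {'A': {'C'}, 'B': {'C'}, 'C': {'D'},
            'D': {'E', 'G'}, 'E': {'F'}, 'G': set(), 'F': set()}
```

On this graph, `markov_blanket('E', parents, children)` yields {D, F}, matching the MB and PC set the case study reports for E.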
Under the causal faithfulness assumption between G0 and P, the MB of a target node T is unique and is the set of parents, children, and spouses of T (i.e., other parents of children of T) [12]. In addition, the parents and children set of T, PCT , is also unique. Intuitively, the MB can directly facilitate causal discovery. If conditioning on the MB of a target variable T renders a variable X independent of T, then X cannot be a direct cause or effect of T. From the local causal discovery point of view, although the MB may contain nodes with different causal relationships with the target, it is reasonable to believe that we can identify their relationships exactly, up to Markov equivalence, with further tests. Lastly, existing causal network learning algorithms all use the three Meek rules [10], which we assume the readers are familiar with, to orient as many edges as possible given all V-structures in a PDAG to obtain the CPDAG. The basic idea is to orient edges so that 1) the edge directions do not introduce new V-structures, 2) the acyclicity of the DAG is preserved, and 3) 3-fork V-structures are enforced. 3 Local Causal Discovery of Direct Causes and Effects Existing MB discovery algorithms do not directly offer the exact causal identities of the learned MB nodes of a target. Although the topology-based methods can find the PC set of the target within the MB set, they can only provide the causal identities of some children and spouses that form V-structures. Nevertheless, following existing works [4, 15], under standard assumptions, every PC variable of a target can only be its direct cause or effect: Theorem 1. Causality within an MB. Under the causal faithfulness, sufficiency, correct independence tests, and no selection bias assumptions, the parent and child nodes within a target’s MB set in a causal network contain all and only the direct causes and effects of the target variable. The proof can be directly derived from the PC set definition of a causal network.
Therefore, if we can discover the exact causal identities of the PC nodes within the MB found by a topology-based MB discovery method, causal discovery of the direct causes and effects of the target is accomplished. Building on MB discovery, we propose a new local causal discovery algorithm, Causal Markov Blanket (CMB) discovery, as shown in Algorithm 1. It identifies the direct causes and effects of a target variable without the need to find the global structure or the MBs of all other variables in a causal network. CMB has three major steps: 1) find the MB set of the target and identify some direct causes and effects by tracking the independence relationship changes among a target’s PC nodes before and after conditioning on the target node, 2) repeat Step 1 but conditioned on one PC node’s MB set, and 3) repeat Steps 1 and 2 with unidentified neighboring nodes as new targets to identify more direct causes and effects of the original target. Step 1: Initial identification. CMB first finds the MB nodes of a target T, MBT , using a topology-based MB discovery algorithm that also finds PCT . CMB then uses the CausalSearch subroutine, shown in Algorithm 2, to get initial causal identities of the variables in PCT by checking every variable pair in PCT according to Lemma 1. Lemma 1. Let (X, Y ) ∈PCT , the PC set of the target T ∈V in a causal DAG. The independence relationships between X and Y can be divided into the following four conditions: C1 X ⊥⊥Y and X ⊥⊥Y |T; this condition cannot happen. C2 X ⊥⊥Y and X ⊥\⊥Y |T ⇒X and Y are both parents of T. C3 X ⊥\⊥Y and X ⊥⊥Y |T ⇒at least one of X and Y is a child of T. C4 X ⊥\⊥Y and X ⊥\⊥Y |T ⇒their identities are inconclusive and need further tests.
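Lemma 1's case analysis amounts to a lookup over two independence tests, one marginal and one conditioned on T. A sketch, where `indep(X, Y, S)` is a hypothetical CI-test oracle (stubbed here with two independence facts of Figure 1a; a real implementation would run statistical tests on data):

```python
def classify_pair(indep, X, Y, T):
    """Classify a PC pair (X, Y) of target T into conditions C1-C4 of
    Lemma 1 from a marginal and a T-conditional independence test."""
    before = indep(X, Y, frozenset())
    after = indep(X, Y, frozenset({T}))
    if before and after:
        return "C1"   # cannot happen for a genuine PC pair
    if before:
        return "C2"   # X and Y are both parents of T (V-structure at T)
    if after:
        return "C3"   # at least one of X, Y is a child of T
    return "C4"       # inconclusive; resolved later via Lemma 2

# Stub oracle encoding Figure 1a facts (hypothetical, for illustration):
FACTS = {(frozenset({'A', 'B'}), frozenset()): True,        # A indep B
         (frozenset({'A', 'B'}), frozenset({'C'})): False,  # A dep B | C
         (frozenset({'D', 'F'}), frozenset()): False,       # D dep F
         (frozenset({'D', 'F'}), frozenset({'E'})): True}   # D indep F | E

def indep(X, Y, S):
    return FACTS[(frozenset({X, Y}), frozenset(S))]
```

With this oracle, the pair (A, B) with respect to C falls into C2 (both parents), and (D, F) with respect to E falls into C3, matching the examples discussed below.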
Algorithm 1 Causal Markov Blanket Discovery Algorithm
 1: Input: D: data; T: target variable
 2: Output: IDT: the causal identities of all nodes with respect to T
 {Step 1: Establish initial ID}
 3: IDT = zeros(|V|, 1);
 4: (MBT, PCT) ← FindMB(T, D);
 5: Z ← ∅;
 6: IDT ← CausalSearch(D, T, PCT, Z, IDT)
 {Step 2: Further test variables with idT = 4}
 7: for one X in each pair (X, Y) with idT = 4 do
 8:   MBX ← FindMB(X, D);
 9:   Z ← {MBX \ T} \ Y;
10:   IDT ← CausalSearch(D, T, PCT, Z, IDT);
11:   if no element of IDT is equal to 4, break;
12: for every pair of parents (X, Y) of T do
13:   if ∃Z s.t. (X, Z) and (Y, Z) are idT = 4 pairs then
14:     IDT(Z) = 1;
15: IDT(X) ← 3, ∀X such that IDT(X) = 4;
 {Step 3: Resolve variable set with idT = 3}
16: for each X with idT = 3 do
17:   recursively find IDX, without going back to the already queried variables;
18:   update IDT according to IDX;
19:   if IDX(T) = 2 then
20:     IDT(X) = 1;
21:     for every Y in idT = 3 variable pairs (X, Y) do
22:       IDT(Y) = 2;
23:   if no element of IDT is equal to 3, break;
24: Return: IDT

Algorithm 2 CausalSearch Subroutine
 1: Input: D: data; T: target variable; PCT: the PC set of T; Z: the conditioned variable set; ID: current ID
 2: Output: IDT: the new causal identities of all nodes with respect to T
 {Step 1: Single PC}
 3: if |PCT| = 1 then
 4:   IDT(PCT) ← 3;
 {Step 2: Check C2 & C3}
 5: for every X, Y ∈ PCT do
 6:   if X ⊥⊥ Y | Z and X ⊥\⊥ Y | T ∪ Z then
 7:     IDT(X) ← 1; IDT(Y) ← 1;
 8:   else if X ⊥\⊥ Y | Z and X ⊥⊥ Y | T ∪ Z then
 9:     if IDT(X) = 1 then
10:       IDT(Y) ← 2
11:     else if IDT(Y) ≠ 2 then
12:       IDT(Y) ← 3
13:     if IDT(Y) = 1 then
14:       IDT(X) ← 2
15:     else if IDT(X) ≠ 2 then
16:       IDT(X) ← 3
17:     add (X, Y) to pairs with idT = 3
18:   else
19:     if IDT(X) & IDT(Y) = 0 or 4 then
20:       IDT(X) ← 4; IDT(Y) ← 4
21:       add (X, Y) to pairs with idT = 4
 {Step 3: Identify idT = 3 pairs with known parents}
22: for every X such that IDT(X) = 1 do
23:   for every Y in idT = 3 variable pairs (X, Y) do
24:     IDT(Y) ← 2;
25: Return: IDT

C1 does
not happen because the path X − T − Y is unblocked either when T is not conditioned on or when it is, and the unblocked path makes X and Y dependent on each other. C2 implies that X and Y form a V-structure with T as the corresponding collider, such as node C in Figure 1a, which has two parents A and B. C3 indicates that the paths between X and Y are blocked conditioned on T, which means that either one of (X, Y ) is a child of T and the other is a parent, or both of (X, Y ) are children of T. For example, nodes D and F in Figure 1a satisfy this condition with respect to E. C4 shows that there may be another unblocked path between X and Y besides X − T − Y . For example, in Figure 1b, nodes D and C have multiple paths between them besides D − T − C. Further tests are needed to resolve this case. Figure 1: a) A sample causal network. b) A sample network with C4 nodes. The only active path between D and C conditioned on MBC \ {T, D} is D − T − C. Notation-wise, we use IDT to represent the causal identities of all the nodes with respect to T, IDT (X) as variable X’s causal identity with respect to T, and the lowercase idT as the individual ID of a node with respect to T. We also use IDX to represent the causal identities of nodes with respect to node X. To avoid changing the already identified PCs, CMB establishes a priority system (note that the identification number is slightly different from the condition number in Lemma 1). We use idT = 1 to represent nodes that are parents of T, idT = 2 for children of T, idT = 3 to represent a pair of nodes that cannot both be parents (and/or ambiguous pairs from Markov equivalent structures, to be discussed at Step 2), and idT = 4 to represent inconclusiveness. A lower-numbered id cannot be changed into a higher number (shown by Lines 11∼15 of Algorithm 2). If a variable pair satisfies C2, both will be labeled as parents (Line 7 of Algorithm 2).
If a variable pair satisfies C3, one of them is labeled as idT = 2 only if the other variable within the pair is already identified as a parent; otherwise, they are both labeled as idT = 3 (Lines 9∼12 and 15∼17 of Algorithm 2). If a PC node remains inconclusive with idT = 0, it is labeled as idT = 4 in Line 20 of Algorithm 2. Note that if T has only one PC node, it is labeled as idT = 3 (Line 4 of Algorithm 2). Non-PC nodes always have idT = 0. Step 2: Resolve idT = 4. Lemma 1 alone cannot identify the variable pairs in PCT with idT = 4 due to other possible unblocked paths, and we have to seek other information. Fortunately, by definition, the MB set of one of the target’s PC nodes can block all paths to that PC node. Lemma 2. Let (X, Y ) ∈PCT , the PC set of the target T ∈V in a causal DAG. The independence relationships between X and Y , conditioned on the MB of X minus {Y, T}, MBX \ {Y, T}, can be divided into the following four conditions: C1 X ⊥⊥Y |MBX \ {Y, T} and X ⊥⊥Y |T ∪MBX \ Y ; this condition cannot happen. C2 X ⊥⊥Y |MBX \ {Y, T} and X ⊥\⊥Y |T ∪MBX \ Y ⇒X and Y are both parents of T. C3 X ⊥\⊥Y |MBX \ {Y, T} and X ⊥⊥Y |T ∪MBX \ Y ⇒at least one of X and Y is a child of T. C4 X ⊥\⊥Y |MBX \ {Y, T} and X ⊥\⊥Y |T ∪MBX \ Y ⇒X and Y are directly connected. C1∼3 are very similar to those in Lemma 1. C4 is true because, conditioned on T and the MB of X minus Y , the only potentially unblocked paths between X and Y are X − T − Y and/or X − Y . If C4 happens, then the path X − T − Y has no impact on the relationship between X and Y , and hence X − Y must be directly connected. If X and Y are not directly connected, then the only potentially unblocked path between X and Y is X − T − Y , and X and Y will be identified by Line 10 of Algorithm 1 with idT ∈{1, 2, 3}. For example, in Figure 1b, conditioned on MBC \ {T, D}, i.e., {A, B}, the only path between C and D is through T.
However, if X and Y are directly connected, they will remain with idT = 4 (such as nodes D and E in Figure 1b). In this case, X, Y , and T form a fully connected clique, and edges among the variables of a fully connected clique can have many different orientation combinations without affecting the conditional independence relationships. Therefore, this case needs further tests to ensure the Meek rules are satisfied. The third Meek rule (enforcing 3-fork V-structures) is first enforced by Line 14 of Algorithm 1. Then the rest of the idT = 4 nodes are changed to idT = 3 by Line 15 of Algorithm 1 and are further processed using neighbor nodes’ causal identities (even though they could both be parents at the same time). Therefore, Step 2 of Algorithm 1 causes all variable pairs with idT = 4 to become identified either as parents, as children, or with idT = 3 after taking some neighbors’ MBs into consideration. Note that Step 2 of CMB only needs to find the MBs of a small subset of the PC variables (in fact, only one MB for each variable pair with idT = 4). Step 3: Resolve idT = 3. After Step 2, some PC variables may still have idT = 3. This can happen because of the existence of Markov equivalent structures. Below we show the condition under which CMB can resolve the causal identities of all PC nodes. Lemma 3. The Identifiability Condition. For Algorithm 1 to fully identify all the causal relationships within the PC set of a target T, 1) T must have at least two nonadjacent parents, 2) one of T’s single ancestors must have at least two nonadjacent parents, or 3) T must have three parents that form a 3-fork pattern as defined in the Meek rules. We use single ancestors to denote ancestor nodes that do not have a spouse with a mutual child that is also an ancestor of T. If the target does not meet any of the conditions in Lemma 3, C2 will never be satisfied and all PC variables within the MB will have idT = 3.
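The first condition of Lemma 3 (two nonadjacent parents, which enables a V-structure at the target) can be checked directly on a candidate structure. A sketch with hypothetical parent/adjacency maps, using the A → C ← B fragment of Figure 1a; conditions 2 and 3 (single ancestors and 3-fork patterns) are omitted here:

```python
def has_two_nonadjacent_parents(T, parents, adjacent):
    """Condition 1 of Lemma 3: T has two parents that are not
    adjacent to each other."""
    ps = sorted(parents[T])
    return any(ps[j] not in adjacent[ps[i]]
               for i in range(len(ps))
               for j in range(i + 1, len(ps)))

# Fragment of Figure 1a: A -> C <- B (A, B nonadjacent), C -> D
parents = {'A': set(), 'B': set(), 'C': {'A', 'B'}, 'D': {'C'}}
adjacent = {'A': {'C'}, 'B': {'C'}, 'C': {'A', 'B', 'D'}, 'D': {'C'}}
```

Here C satisfies condition 1 (its parents A and B are nonadjacent), while D, with a single parent, does not.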
Without a single parent identified, it is impossible to infer the identities of child nodes using C3. Therefore, all the identities of the PC nodes are uncertain, even though the resulting structure could be a CPDAG. Step 3 of CMB searches for a non-single ancestor of T to infer the causal directions. For each node X with idT = 3, CMB tries to identify its local causal structure recursively. If X’s PC nodes are all identified, it returns to the target with the resolved identities; otherwise, it continues to search for a non-single ancestor of X. Note that CMB will not go back to already-searched variables with unresolved PC nodes unless new information is provided. Step 3 of CMB checks the identifiability condition for all the ancestors of the target. If a graph structure does not meet the conditions of Lemma 3, the final IDT will contain some idT = 3, which indicates reversible edges in the CPDAG. The causal graph found by CMB will be a PDAG after Step 2 of Algorithm 1, and it will be a CPDAG after Step 3 of Algorithm 1. Case Study. The procedure for using CMB to identify the direct causes and effects of E in Figure 1a has the following three steps. Step 1: CMB finds the MB and PC set of E. The PC set contains nodes D and F. Then, IDE(D) = 3 and IDE(F) = 3. Step 2: to resolve the variable pair D and F with idE = 3, 1) CMB finds the PC set of D, containing C, E, and G. Their idD values are all 3, since D has only one parent. 2) To resolve IDD, CMB checks the causal identities of nodes C and G (without going back to E). The PC set of C contains A, B, and D. CMB identifies IDC(A) = 1, IDC(B) = 1, and IDC(D) = 2. Since C resolves all its PC nodes, CMB returns to node D with IDD(C) = 1. 3) With the new parent C, IDD(G) = 2, IDD(E) = 2, and CMB returns to node E with IDE(D) = 1. Step 3: with IDE(D) = 1, after resolving the pair with idE = 3, IDE(F) = 2. Theorem 2. The Soundness and Completeness of the CMB Algorithm.
If the identifiability condition is satisfied, then using a sound and complete MB discovery algorithm, CMB will identify the direct causes and effects of the target under the causal faithfulness, sufficiency, correct independence tests, and no selection bias assumptions. Proof. A sound and complete MB discovery algorithm finds all and only the MB nodes of a target. Using it, and under the causal sufficiency assumption, the learned PC set contains all and only the cause-effect variables by Theorem 1. When Lemma 3 is satisfied, all parent nodes are identifiable through V-structure independence changes, either by Lemma 1 or by Lemma 2. Also, since children cannot be conditionally independent of another PC node given its MB minus the target node (C2), all parents identified by Lemmas 1 and 2 will be true positive direct causes. Therefore, all and only the true positive direct causes will be correctly identified by CMB. Since PC variables can only be direct causes or direct effects, all and only the direct effects are identified correctly by CMB. In the cases where CMB fails to identify all the PC nodes, global causal discovery methods cannot identify them either. Specifically, structures failing to satisfy Lemma 3 can have different orientations on some edges while preserving the skeleton and V-structures, hence leading to Markov equivalent structures. For the cases where all of T’s ancestors are single ancestors, the edge directions among the single ancestors can always be reversed without introducing new V-structures or violating the DAG property, in which cases the Meek rules cannot identify the causal directions either. For the cases with fully connected cliques, these cliques do not meet the nonadjacent-parents requirement of the first Meek rule (no new V-structures), and the second Meek rule (preserving DAGs) can always be satisfied within a clique by changing the direction of one edge.
Since CMB orients the 3-fork V-structure in the third Meek rule correctly by Lines 12∼14 of Algorithm 1, CMB can identify the same structure as the global methods that use the Meek rules. Theorem 3. Consistency between CMB and Global Causal Discovery Methods. For the same DAG G, Algorithm 1 will correctly identify all the direct causes and effects of a target variable T as the global and local-to-global causal discovery methods (footnote 2) that use the Meek rules [10], up to G’s CPDAG, under the causal faithfulness, sufficiency, correct independence tests, and no selection bias assumptions. Proof. It has been shown that causal methods using the Meek rules [10] can identify a graph up to its CPDAG. Since the Meek rules cannot identify the structures that fail Lemma 3, the global and local-to-global methods can only identify the same structures as CMB. Since CMB is sound and complete in identifying these structures by Theorem 2, CMB will identify all direct causes and effects up to G’s CPDAG. 3.1 Complexity The complexity of the CMB algorithm is dominated by the step of finding the MB, which can have exponential complexity [1, 16]. All other steps of CMB are trivial in comparison. If we assume a uniform distribution on the neighbor sizes in a network with N nodes, then the expected time complexity of Step 1 of CMB is O((1/N) · sum_{i=1}^{N} 2^i) = O(2^N / N), while local-to-global methods are O(2^N). In later steps, CMB also needs to find MBs for a small subset of nodes that includes 1) one node between every pair of nodes that meet C4, and 2) a subset of the target’s neighboring nodes that provide additional clues for the target. Let l be the total number of these nodes; then CMB reduces the cost by a factor of N/l asymptotically. 4 Experiments We use benchmark causal learning datasets to evaluate the accuracy and efficiency of CMB against the four causal discovery algorithms discussed above (P-C, GS, MMHC, and CS) and the local causal discovery algorithm LCD2 [7].
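The expected-cost estimate of Section 3.1 can be sanity-checked numerically. The sketch below (not from the paper) evaluates the geometric sum and compares it with the 2^N cost of local-to-global methods; since the sum equals 2^(N+1) − 2, the speedup factor approaches N/2:

```python
def expected_step1_tests(N):
    """Expected cost of CMB's Step 1 under a uniform distribution over
    neighbor sizes 1..N: (1/N) * sum_{i=1}^{N} 2**i = O(2**N / N)."""
    return sum(2 ** i for i in range(1, N + 1)) / N

N = 20
# Ratio of local-to-global cost (2**N) to CMB's expected Step 1 cost.
speedup = (2 ** N) / expected_step1_tests(N)
```

For N = 20 the ratio is about 10, i.e., roughly N/2, consistent with the asymptotic claim.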
Due to the page limit, we show the results of the causal algorithms on four medium-to-large datasets: ALARM, ALARM3, CHILD3, and INSUR3. They contain 37 to 111 nodes. We use 1000 data samples for all datasets. For each global or local-to-global algorithm, we find the global structure of a dataset and then extract the causal identities of all nodes with respect to a target node. CMB finds the causal identities of every variable with respect to the target directly. We repeat the discovery process for each node in the datasets, and compare the causal identities discovered by each algorithm against all Markov equivalent structures of the known ground-truth structure. We use the edge scores [15] to measure the number of missing edges, extra edges, and reversed edges (see footnote 3) in each node’s local causal structure, and report average values along with their standard deviations over all the nodes in a dataset. We use the existing implementation [21] of the HITON-MB discovery algorithm to find the MB of a target variable for all the algorithms. We also use the existing implementations [21] of the P-C, MMHC, and LCD2 algorithms. We implement GS, CS, and the proposed CMB algorithm in MATLAB on a machine with a 2.66 GHz CPU and 24 GB memory. Following the existing protocol [15], we use the number of conditional independence tests needed (or scores computed, for the score-based search method MMHC) to find the causal structures given the MBs (see footnote 4), and the number of times that MB discovery algorithms are invoked, to measure the efficiency of the various algorithms. We use mutual-information-based conditional independence tests with a standard significance level of 0.02 for all the datasets, without parameter tuning. As shown in Table 1, CMB consistently outperforms the global discovery algorithms on benchmark causal networks, and has comparable edge accuracy with local-to-global algorithms.
Although CMB makes slightly more total edge errors on the ALARM and ALARM3 datasets than CS, CMB is the best method on CHILD3 and INSUR3. Since LCD2 is an incomplete algorithm, it never finds extra or reversed edges but misses the most edges. Efficiency-wise, CMB can achieve more than one order of magnitude speedup over the global methods, and sometimes two orders of magnitude, as on CHILD3 and INSUR3. Compared to local-to-global methods, CMB can also achieve more than one order of speedup on ALARM3, CHILD3, and INSUR3. In addition, on these datasets, CMB only invokes MB discovery algorithms 2 to 3 times, drastically reducing the MB calls of local-to-global algorithms. Since a comparison based on independence-test counts is unfair to LCD2, which uses neither MB discovery nor moral graphs, we also compared the time efficiency of LCD2 and CMB: CMB is 5 times faster on ALARM, 4 times faster on ALARM3 and CHILD3, and 8 times faster on INSUR3 than LCD2.

Table 1: Performance of Various Causal Discovery Algorithms on Benchmark Networks (Errors: edge counts; Efficiency: number of tests and number of MB calls)

Dataset | Method | Extra     | Missing   | Reversed  | Total     | No. Tests   | No. MB
--------|--------|-----------|-----------|-----------|-----------|-------------|----------
Alarm   | P-C    | 1.59±0.19 | 2.19±0.14 | 0.32±0.10 | 4.10±0.19 | 4.0e3±4.0e2 | -
Alarm   | MMHC   | 1.29±0.18 | 1.94±0.09 | 0.24±0.06 | 3.46±0.23 | 1.8e3±1.7e3 | 37±0
Alarm   | GS     | 0.39±0.44 | 0.87±0.48 | 1.13±0.23 | 2.39±0.44 | 586.5±72.2  | 37±0
Alarm   | CS     | 0.42±0.10 | 0.64±0.10 | 0.38±0.08 | 1.43±0.10 | 331.4±61.9  | 37±0
Alarm   | LCD2   | 0.00±0.00 | 2.49±0.00 | 0.00±0.00 | 2.49±0.00 | 1.4e3±0     | -
Alarm   | CMB    | 0.69±0.13 | 0.61±0.11 | 0.51±0.10 | 1.81±0.11 | 53.7±4.5    | 2.61±0.12
Alarm3  | P-C    | 3.71±0.57 | 2.21±0.25 | 1.37±0.04 | 7.30±0.68 | 1.6e4±4.0e2 | -
Alarm3  | MMHC   | 2.36±0.11 | 2.45±0.08 | 0.72±0.08 | 5.53±0.27 | 3.7e3±6.1e2 | 111±0
Alarm3  | GS     | 1.24±0.23 | 1.41±0.05 | 0.99±0.14 | 3.64±0.13 | 2.1e3±1.2e2 | 111±0
Alarm3  | CS     | 1.26±0.16 | 1.47±0.08 | 0.63±0.14 | 3.38±0.13 | 699.1±60.4  | 111±0
Alarm3  | LCD2   | 0.00±0.00 | 3.85±0.00 | 0.00±0.00 | 3.85±0.00 | 1.2e4±0     | -
Alarm3  | CMB    | 1.41±0.13 | 1.55±0.27 | 0.78±0.25 | 3.73±0.11 | 50.3±6.2    | 2.58±0.09
Child3  | P-C    | 4.32±0.68 | 2.69±0.08 | 0.84±0.10 | 7.76±0.98 | 8.3e4±2.9e3 | -
Child3  | MMHC   | 1.98±0.10 | 1.57±0.04 | 0.43±0.04 | 4.00±0.93 | 6.6e3±8.2e2 | 60±0
Child3  | GS     | 0.88±0.04 | 0.75±0.08 | 1.03±0.08 | 2.66±0.33 | 2.1e3±2.5e2 | 60±0
Child3  | CS     | 0.94±0.20 | 0.91±0.14 | 0.53±0.08 | 2.37±0.33 | 1.0e3±4.8e2 | 60±0
Child3  | LCD2   | 0.00±0.00 | 2.63±0.00 | 0.00±0.00 | 2.63±0.00 | 3.6e3±0     | -
Child3  | CMB    | 0.92±0.12 | 0.84±0.16 | 0.60±0.10 | 2.36±0.31 | 78.2±15.2   | 2.53±0.15
Insur3  | P-C    | 4.76±1.33 | 2.50±0.11 | 1.29±0.11 | 8.55±0.81 | 2.5e5±1.2e4 | -
Insur3  | MMHC   | 2.39±0.18 | 2.53±0.06 | 0.76±0.07 | 5.68±0.43 | 3.1e4±5.2e2 | 81±0
Insur3  | GS     | 1.94±0.06 | 1.44±0.05 | 1.19±0.10 | 4.57±0.33 | 4.5e4±2.2e3 | 81±0
Insur3  | CS     | 1.92±0.08 | 1.56±0.06 | 0.89±0.09 | 4.37±0.23 | 2.6e4±3.9e3 | 81±0
Insur3  | LCD2   | 0.00±0.00 | 5.03±0.00 | 0.00±0.00 | 5.03±0.00 | 6.6e3±0     | -
Insur3  | CMB    | 1.72±0.07 | 1.39±0.06 | 1.19±0.05 | 4.30±0.21 | 159.8±38.5  | 2.46±0.11

Footnote 2: We specify the global and local-to-global causal methods to be P-C [19], GS [9], and CS [15].
Footnote 3: If an edge is reversible in the equivalence class of the original graph but not in the equivalence class of the learned graph, it is also counted as a reversed edge.
Footnote 4: For global methods, this is the number of tests needed or scores computed given the moral graph of the global structure. For LCD2, it is the total number of tests, since LCD2 uses neither moral graphs nor MBs.
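The edge scores of [15] reported in Table 1 amount to counting extra, missing, and reversed edges of a learned local structure against the ground truth. A simplified sketch (illustrative; it ignores the equivalence-class subtlety described in footnote 3):

```python
def edge_errors(true_edges, learned_edges):
    """Count (extra, missing, reversed) directed edges of a learned
    structure against the ground truth; edges are (tail, head) pairs."""
    extra = missing = reversed_ = 0
    for (a, b) in learned_edges:
        if (a, b) in true_edges:
            continue
        if (b, a) in true_edges:
            reversed_ += 1   # right adjacency, wrong orientation
        else:
            extra += 1       # adjacency absent from the true graph
    for (a, b) in true_edges:
        if (a, b) not in learned_edges and (b, a) not in learned_edges:
            missing += 1     # adjacency missed entirely
    return extra, missing, reversed_
```

For example, against true edges {A→C, B→C, C→D}, a learned structure {A→C, C→B, D→E} scores one reversed edge (C→B), one extra edge (D→E), and one missing edge (C→D).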
In practice, the performance of CMB depends on two factors: the accuracy of the independence tests and of the MB discovery algorithms. First, independence tests may not always be accurate and can introduce errors while checking the four conditions of Lemmas 1 and 2, especially with insufficient data samples. Second, causal discovery performance depends heavily on the performance of the MB discovery step, as its errors can propagate to later steps of CMB. Improvements in both areas could further improve CMB’s accuracy. Efficiency-wise, CMB’s complexity can still be exponential and is dominated by the MB discovery phase, and thus its worst-case complexity could be the same as that of local-to-global approaches for some special structures. 5 Conclusion We propose a new local causal discovery algorithm, CMB. We show that CMB can identify the same causal structure as the global and local-to-global causal discovery algorithms under the same identification condition, but at a fraction of the cost of the global and local-to-global approaches. We further prove the soundness and completeness of CMB. Experiments on benchmark datasets show the comparable accuracy and greatly improved efficiency of CMB for local causal discovery. Possible future work includes relaxing the assumptions, especially the causal sufficiency assumption, such as by using a procedure similar to the FCI algorithm and the improved CS algorithm [14] to handle latent variables in CMB. References [1] Constantin F. Aliferis, Ioannis Tsamardinos, and Alexander Statnikov. HITON: a novel Markov blanket algorithm for optimal variable selection, 2003. [2] David Maxwell Chickering. Optimal structure identification with greedy search. Journal of Machine Learning Research, 2002. [3] Gregory F Cooper. A simple constraint-based algorithm for efficiently mining observational databases for causal relationships.
Data Mining and Knowledge Discovery, 1(2):203–224, 1997. [4] Isabelle Guyon, André Elisseeff, and Constantin Aliferis. Causal feature selection. 2007. [5] Daphne Koller and Mehran Sahami. Toward optimal feature selection. In ICML 1996, pages 284–292. Morgan Kaufmann, 1996. [6] Subramani Mani, Constantin F Aliferis, and Alexander R Statnikov. Bayesian algorithms for causal data mining. In NIPS Causality: Objectives and Assessment, pages 121–136, 2010. [7] Subramani Mani and Gregory F Cooper. A study in causal discovery from population-based infant birth and death records. In Proceedings of the AMIA Symposium, page 315. American Medical Informatics Association, 1999. [8] Subramani Mani and Gregory F Cooper. Causal discovery using a Bayesian local causal discovery algorithm. Medinfo, 11(Pt 1):731–735, 2004. [9] Dimitris Margaritis and Sebastian Thrun. Bayesian network induction via local neighborhoods. In Advances in Neural Information Processing Systems 12, pages 505–511. MIT Press, 1999. [10] Christopher Meek. Causal inference and causal explanation with background knowledge. In Proceedings of the Eleventh Conference on Uncertainty in Artificial Intelligence, pages 403–410. Morgan Kaufmann Publishers Inc., 1995. [11] Teppo Niinimaki and Pekka Parviainen. Local structure discovery in Bayesian networks. In Proceedings of Uncertainty in Artificial Intelligence, Workshop on Causal Structure Learning, pages 634–643, 2012. [12] Judea Pearl. Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference. Morgan Kaufmann Publishers, Inc., 2 edition, 1988. [13] Judea Pearl. Causality: Models, Reasoning and Inference, volume 29. Cambridge Univ Press, 2000. [14] Jean-Philippe Pellet and André Elisseeff. Finding latent causes in causal networks: an efficient approach based on Markov blankets. In Advances in Neural Information Processing Systems, pages 1249–1256, 2009. [15] Jean-Philippe Pellet and André Elisseeff.
Using Markov blankets for causal structure learning. Journal of Machine Learning Research, 2008. [16] Jose M. Peña, Roland Nilsson, Johan Björkegren, and Jesper Tegnér. Towards scalable and data efficient learning of Markov boundaries. Int. J. Approx. Reasoning, 45(2):211–232, July 2007. [17] Craig Silverstein, Sergey Brin, Rajeev Motwani, and Jeff Ullman. Scalable techniques for mining causal structures. Data Mining and Knowledge Discovery, 4(2-3):163–192, 2000. [18] P. Spirtes, C. Glymour, and R. Scheines. Causation, Prediction, and Search. The MIT Press, 2nd edition, 2000. [19] Peter Spirtes, Clark Glymour, Richard Scheines, Stuart Kauffman, Valerio Aimale, and Frank Wimberly. Constructing Bayesian network models of gene expression networks from microarray data, 2000. [20] Peter Spirtes, Christopher Meek, and Thomas Richardson. Causal inference in the presence of latent variables and selection bias. In Proceedings of the Eleventh Conference on Uncertainty in Artificial Intelligence, pages 499–506. Morgan Kaufmann Publishers Inc., 1995. [21] Alexander Statnikov, Ioannis Tsamardinos, Laura E. Brown, and Constantin F. Aliferis. Causal explorer: A MATLAB library of algorithms for causal discovery and variable selection for classification. In Causation and Prediction Challenge at WCCI, 2008. [22] Ioannis Tsamardinos, Constantin F. Aliferis, and Alexander Statnikov. Time and sample efficient discovery of Markov blankets and direct causal relations. In Proceedings of the Ninth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD ’03, pages 673–678, New York, NY, USA, 2003. ACM. [23] Ioannis Tsamardinos, Laura E. Brown, and Constantin F. Aliferis. The max-min hill-climbing Bayesian network structure learning algorithm. Machine Learning, 65(1):31–78, 2006. [24] Jiji Zhang. On the completeness of orientation rules for causal discovery in the presence of latent confounders and selection bias. Artificial Intelligence, 172(16):1873–1896, 2008.
Recognizing retinal ganglion cells in the dark Emile Richard Stanford University emileric@stanford.edu Georges Goetz Stanford University ggoetz@stanford.edu E.J. Chichilnisky Stanford University ej@stanford.edu Abstract Many neural circuits are composed of numerous distinct cell types that perform different operations on their inputs and send their outputs to distinct targets. Therefore, a key step in understanding neural systems is to reliably distinguish cell types. An important example is the retina, for which present-day techniques for identifying cell types are accurate but very labor-intensive. Here, we develop automated classifiers for functional identification of retinal ganglion cells, the output neurons of the retina, based solely on recorded voltage patterns on a large-scale array. We use per-cell classifiers based on features extracted from electrophysiological images (spatiotemporal voltage waveforms) and interspike intervals (autocorrelations). These classifiers achieve high performance in distinguishing between the major ganglion cell classes of the primate retina, but fall short of the same accuracy in predicting cell polarity (ON vs. OFF). We then show how to use indicators of functional coupling within populations of ganglion cells (cross-correlation) to infer cell polarities with a matrix completion algorithm. This can result in accurate, fully automated methods for cell type classification. 1 Introduction In the primate and human retina, roughly 20 distinct classes of retinal ganglion cells (RGCs) send distinct visual information to diverse targets in the brain [18, 7, 6]. Two complementary methods for identification of these RGC types have been pursued extensively. Anatomical studies have relied on indicators such as dendritic field size and shape, and stratification patterns in synaptic connections [8] to distinguish between cell classes.
Functional studies have leveraged differences in responses to stimulation with a variety of visual stimuli [9, 3] for the same purpose. Although successful, these methods are difficult, time-consuming, and require significant expertise. Thus, they are not suitable for large-scale, automated analysis of existing large-scale physiological recording data, and in some clinical settings they are entirely inapplicable. At least two specific scientific and engineering goals demand the development of efficient methods for cell type identification: • Discovery of new cell types. While ∼20 morphologically distinct RGC types exist, only 7 have been characterized functionally. Automated means of detecting unknown cell types in electrophysiological recordings would make it possible to process massive amounts of existing large-scale physiological data that would take too long to analyze manually, in order to search for the poorly understood RGC types. • Developing brain-machine interfaces of the future. In blind patients suffering from retinal degeneration, RGCs no longer respond to light. Advanced retinal prostheses, previously demonstrated ex vivo, aim at electrically restoring the correct neural code in each RGC type in a diseased retina [11], which requires cell type identification without information about the light response properties of RGCs. In the present paper, we introduce two novel and efficient computational methods for cell type identification in a neural circuit, using spatiotemporal voltage signals produced by spiking cells recorded with a high-density, large-scale electrode array [14]. We describe the data we used for our study in Section 2, and we show how the raw descriptors used by our classifiers are extracted from voltage recordings of a primate retina.
We then introduce a classifier that leverages both hand-specified and random-projection-based features of the electrical signatures of individual RGCs, as well as large unlabeled data sets, to identify cell types (Section 3). We evaluate its performance for distinguishing between midget, parasol and small bistratified cells on manually annotated datasets. Then, in Section 4, we show how matrix completion techniques can be used to identify the polarities of populations of cells, and assess the accuracy of our algorithm by predicting the polarity (ON or OFF) of RGCs on datasets where a ground truth is available. Section 5 is devoted to numerical experiments designed to test our modeling choices. Finally, we discuss future work in Section 6. 2 Extracting descriptors from electrical recordings In this section, we define the electrical signatures that we use for cell classification; the algorithms that perform the statistical inference of cell type are described in the subsequent sections. We exploit three electrical signatures of recorded neurons that are well measured in large-scale, high-density recordings. First, the electrical image (EI) of each cell, which is the average spatiotemporal pattern of voltage measured across the entire electrode array during the spiking of a cell. This measure provides information about the geometric and electrical conduction properties of the cell itself. Second, the inter-spike interval distribution (ISI), which summarizes the temporal separation between spikes emitted by the cell. This measure reflects the specific ion channels in the cell and their distribution across the cell. Third, the cross-correlation function (CCF) of firing between cells. This measure captures the degree and polarity of interactions between cells in the generation of a spike.
2.1 Electrophysiological image calculation, alignment and filtering The raw data we used for our numerical experiments consist of extracellular voltage recordings of the electrical activity of retinas from male and female macaque monkeys, sampled and digitized at 20 kHz per channel over 512 channels laid out in a 60 µm hexagonal lattice (see Appendix for a 100 ms sample movie of an electrical recording). The emission of an action potential by a spiking neuron causes transient voltage fluctuations along its anatomical features (soma, dendritic tree, axon). By bringing an extracellular array of electrodes in contact with neural tissue, we capture the 2D projection of these voltage changes onto the plane of the recording electrodes (see Figure 1). With such dense multielectrode arrays, the voltage activity from a single cell is usually picked up on multiple electrodes. While the literature refers to this footprint as the electrophysiological or electrical image (EI) of the cell [13], it is an inherently spatiotemporal characteristic of the neuron, due to the transient nature of action potentials. In essence, it is a short movie (∼1.5 ms) of the average electrical activity over the array during the emission of an action potential by a spiking neuron, which can include the properties of other cells whose firing is correlated with this neuron. We calculated the electrical images of each identified RGC in the recording as described in the literature [13]. In a 30–60 minute recording, we typically detected 1,000–100,000 action potentials per RGC. For each cell, we averaged the voltages recorded over the entire array in a 1.5 ms window starting 0.25 ms before the peak negative voltage sample of each action potential.
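The spike-triggered averaging described above can be sketched in a few lines. The function name, array layout (channels × samples), and spike-index representation below are illustrative assumptions, not the authors' code:

```python
import numpy as np

def electrical_image(recording, spike_samples, fs=20000, pre_ms=0.25, win_ms=1.5):
    """Average the spatiotemporal voltage window around each spike (the EI).

    recording: (n_channels, n_samples) array of voltages sampled at fs Hz.
    spike_samples: sample indices of the peak negative voltage of each spike.
    Returns a (n_channels, window_length) spike-triggered average.
    """
    pre = int(round(pre_ms * fs / 1000))   # samples before the peak (5 at 20 kHz)
    win = int(round(win_ms * fs / 1000))   # window length (30 at 20 kHz)
    n_channels, n_samples = recording.shape
    acc = np.zeros((n_channels, win))
    count = 0
    for t in spike_samples:
        start = t - pre
        if start < 0 or start + win > n_samples:
            continue                        # skip spikes too close to the edges
        acc += recording[:, start:start + win]
        count += 1
    return acc / max(count, 1)
```

With the paper's parameters (20 kHz, 0.25 ms pre-peak, 1.5 ms window) each per-spike window is 30 samples long, matching the 30 × 19 EI matrices described next.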
We cropped from the electrode array the subset of electrodes that falls within a 125 µm radius around the RGC soma (see Figure 1) in order to represent each EI by a 30 × 19 matrix (time points × number of electrodes in a 125 µm radius), or equivalently a 570-dimensional vector. We augment the training data by exploiting the symmetries of the (approximately) hexagonal grid of the electrode array: we form the training EIs from the original EIs, their rotations by iπ/3, i = 1, · · · , 6, and the reflections of each (12 spatial symmetries in total). The characteristic radius (125 µm here) used to select the central portion of the EI is a hyper-parameter of our method which controls the signal-to-noise ratio in the input data (see Section 5, Figure 3, middle panel). In the Appendix of this paper we describe 3 families (subdivided into 7 sub-families) of filters we manually built to capture anatomical features of the cell. In particular, we included filters corresponding to various action potential propagation velocities at the level of the axon and hard-coded a parameter which captures the soma size. These quantities are believed to be indicative of cell type. Figure 1: EIs and cell morphology. (Top row) Multielectrode arrays record a 2D projection of spatiotemporal action potentials, schematically illustrated here for a midget (left) and a parasol (right) RGC. Midget cells have an asymmetric dendritic field, while parasol cells are more isotropic. (Bottom row) Temporal evolution of the voltage recorded on the electrodes located within a 125 µm radius around the electrode where the largest action potential was detected, which we use for cell type classification.
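The 12-fold symmetry augmentation can be realized as index permutations of the electrodes. The sketch below assumes electrodes are labeled by hexagonal cube coordinates (x, y, z) with x + y + z = 0, where a 60° rotation maps (x, y, z) to (−z, −x, −y); the helper names and coordinate convention are assumptions for illustration, not taken from the paper:

```python
def hex_rotate(coords):
    """Rotate cube coordinates (x, y, z) with x + y + z = 0 by 60 degrees."""
    return [(-z, -x, -y) for (x, y, z) in coords]

def hex_reflect(coords):
    """Reflect cube coordinates across one lattice axis."""
    return [(x, z, y) for (x, y, z) in coords]

def symmetry_permutations(coords):
    """Electrode-index permutations realizing the 12 symmetries of a hex patch.

    coords: list of cube coordinates, one per electrode; the patch must itself
    be symmetric (e.g. a hexagonal neighborhood around the soma electrode).
    """
    index = {c: i for i, c in enumerate(coords)}
    perms = []
    for reflect in (False, True):
        cur = hex_reflect(coords) if reflect else list(coords)
        for _ in range(6):
            perms.append([index[c] for c in cur])
            cur = hex_rotate(cur)
    return perms  # applying a permutation to the EI's electrode rows augments it
```

Each permutation reorders the 19 electrode rows of an EI, so one recorded EI yields 12 training examples.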
Circle size represents signal amplitude: red circles, positive voltages; blue circles, negative voltages. We filtered the spatiotemporally aligned RGC electrical images with our hand-defined filters to create a first feature set. In separate experiments we also filtered the aligned EIs with iid Gaussian random filters (the same number as our hand-defined features), in the fashion of [17]; see Table 1 for a performance comparison. 2.2 Interspike Intervals The statistics of the timing of action potential trains are another source of information about functional RGC types. Interspike intervals (ISIs) are an estimate of the probability of emission of two consecutive action potentials within a given time difference by a spiking neuron. We build histograms of the times elapsed between two consecutive action potentials for each cell to form its ISI. We estimate the interspike intervals over 100 ms, with a time granularity of 0.5 ms, resulting in 200-dimensional ISI vectors. ISIs always begin with a refractory period (i.e. a duration following an action potential over which no further action potentials occur), which lasts the first 1–2 ms. ISIs then increase before decaying back to zero at rates representative of the functional cell type (see Figure 2, left-hand side). We describe each ISI using the values of the time differences ∆t at which the smoothed ISI reaches 20, 40, 60, 80, and 100% of its maximum value, as well as the slopes of the linear interpolations between each consecutive pair of points. 2.3 Cross-correlation functions and electrical coupling of cells In the retina, neighboring ganglion cells of the same type have a high probability of jointly emitting action potentials, while RGCs of antagonistic polarities (ON vs. OFF cells) tend to exhibit strongly negatively correlated firing patterns [16, 10]. In other words, the emission of an action potential in the ON pathway leads to a reduced probability of observing an action potential in the OFF pathway at the same time.
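The ISI summary of Section 2.2 (level-crossing times plus interpolation slopes) can be sketched as follows; the function name, return layout, and the guard against zero time differences are illustrative assumptions:

```python
import numpy as np

def isi_descriptor(isi, dt=0.5):
    """Compress a smoothed ISI histogram into level-crossing times and slopes.

    isi: histogram values on a grid with bin width dt (ms).
    Returns the first times at which the curve reaches 20/40/60/80/100%
    of its maximum, followed by the slopes between consecutive points.
    """
    isi = np.asarray(isi, dtype=float)
    levels = np.array([0.2, 0.4, 0.6, 0.8, 1.0]) * isi.max()
    times = np.array([np.argmax(isi >= lv) * dt for lv in levels])
    # slope of the linear interpolation between consecutive crossing points
    # (clamp the denominator to one bin width to avoid division by zero)
    slopes = np.diff(levels) / np.maximum(np.diff(times), dt)
    return np.concatenate([times, slopes])
```

This compresses the 200-dimensional ISI vector into 9 numbers (5 crossing times, 4 slopes).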
The cross-correlation function of two RGCs characterizes the probability of joint emission of action potentials by this pair of cells at a given latency, and as such holds information about the functional coupling between the two cells. Cross-correlations between different functional RGC types have been studied extensively in the literature, for example in [10]. Construction of CCFs follows the same steps as ISI computation: we obtain the CCF of a pair of cells by building a histogram of the time differences between their firing times. A large CCF value near the origin is indicative of positive functional coupling, whereas negative coupling corresponds to a negative CCF at the origin (see Figure 2, the three panels on the right). Figure 2: (Left panel) Interspike intervals for the 5 major RGC types of the primate retina. (Right panels) Cross-correlation functions between parasol cells. Purple traces: single pairwise CCFs. Red line: population average. Green arrow: strength of the correlation. 3 Learning electrical signatures of retinal ganglion cells 3.1 Learning dictionaries from slices of unlabeled data Learning descriptors from unlabeled data, or dictionary learning [15], has been successfully used for classification tasks on high-dimensional data such as images, speech and text [15, 4]. The methodology we used for learning discriminative features given a relatively large amount of unlabeled data closely follows the steps described in [4, 5]. Extracting independent slices from the data.
The first step in our approach consists of extracting (as nearly as possible) independent slices from the data points. One can think of a slice as a subset of the descriptors that is (nearly) independent from the other subsets; in image processing the analogous object is called a patch, i.e. a small sub-image. In our case, we used 8 data slices. The ISI descriptors form one such slice; the others are extracted from EIs. It is reasonable to assume ISI features and EI descriptors are independent quantities. After aligning the EIs and filtering them with a collection of 7 filter banks (see Appendix for a description of our biologically motivated filters), we group each set of filtered EIs. Each group of filters reacts to specific patterns in EIs: rotational motion driven by dendrites, radial propagation of the electrical signal along the axon, and the direction of propagation are behaviors captured by distinct filter banks. We therefore treat the response of the data to each filter bank as a distinct data slice. Each slice is then whitened [5], and finally we perform sparse k-means on each slice separately, where the integer k is a parameter of our algorithm. That is, letting X ∈ R^(n×p) denote a slice of data (n: number of data points, p: dimensionality of the slice) and C_(n,k) denote the set of cluster assignment matrices C_(n,k) = {U ∈ {0, 1}^(n×k) : ∀i ∈ [n], ∥U_(i,·)∥_0 = 1}, we consider the optimization problem min ∥X − UV^T∥_F^2 + η∥V∥_1 s.t. U ∈ C_(n,k), V ∈ R^(p×k). (1) Warm-starting k-means with warm-started NMF. In order to solve the optimization problem (1), we propose a coarse-to-fine strategy that consists of relaxing the constraint U ∈ C_(n,k) in two steps. We initially relax the constraint U ∈ C_(n,k) completely and set η = 0: that is, we consider problem (1) with C_(n,k) replaced by the larger set R^(n×k), and run an alternating minimization for a few steps. Then, we replace the clustering constraint C_(n,k) with a nonnegativity constraint U ∈ R_+^(n×k) while keeping η = 0.
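Once the assignment constraint U ∈ C_(n,k) is active, problem (1) can be attacked by plain alternating minimization: with assignments fixed, the ℓ1-penalized centroid has a closed form, v_j = soft(cluster mean, η / (2·cluster size)); with centroids fixed, each point moves to its nearest centroid. A minimal sketch (deterministic initialization in place of the paper's warm start; names are assumptions):

```python
import numpy as np

def soft(v, t):
    """Elementwise soft-thresholding."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def sparse_kmeans(X, k, eta=0.1, iters=50):
    """Alternating minimization for min ||X - U V^T||_F^2 + eta*||V||_1
    over cluster assignments U (one 1 per row) and centroids V (p x k)."""
    n, p = X.shape
    labels = np.arange(n) % k          # simple deterministic initial assignment
    V = np.zeros((p, k))
    for _ in range(iters):
        # V-step: l1-penalized centroid, v_j = soft(mean, eta / (2 * cluster size))
        for j in range(k):
            members = X[labels == j]
            if len(members):
                V[:, j] = soft(members.mean(axis=0), eta / (2 * len(members)))
        # U-step: reassign each point to its nearest centroid
        d2 = ((X[:, None, :] - V.T[None, :, :]) ** 2).sum(axis=2)
        labels = d2.argmin(axis=1)
    return labels, V
```

The paper's warm-start strategy would replace the naive initialization here with the unconstrained and nonnegative relaxation phases described in the text.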
After a few steps of nonnegative alternating minimization we activate the constraint U ∈ C_(n,k) and finally raise the value of η. This warm-start strategy systematically resulted in lower values of the objective compared to random or k-means++ [1] initializations. 3.2 Building feature vectors for labeled data In order to extract feature vectors from labeled data, we first slice each data point: we extract ISI features on the one hand and filter each data point with all filter families on the other. Each slice is separately whitened and compared to the cluster centers of its slice. For this, we use the matrices V^(j) of cluster centroids computed for all slices j = 1, · · · , 8. Letting s(·, κ) denote the soft-thresholding operator s(x, κ) = (sign(x_i) max{|x_i| − κ, 0})_i, we compute ˜x^(j) = s(V^(j)T x^(j), κ) for each slice, i.e. the soft-thresholded inner products of the corresponding slice x^(j) of the data point with all cluster centroids for the same slice j. We concatenate the ˜x^(j) from the different slices into the encoded point ˜x = (˜x^(j))_j and use it to predict cell types. The last step is performed by feeding the concatenated vectors ˜x, together with the corresponding labels, either to a logistic regression classifier which handles multiple classes in a one-versus-all fashion, or to a random forest classifier. 4 Predicting cell polarities by completing the RGC coupling matrix We additionally exploit pairwise spike train cross-correlations to infer RGC polarities (ON vs. OFF), estimating the polarity vector y from a measure of the pairwise functional coupling strength between cells. The rationale behind this approach is that neighboring cells of the same polarity will tend to exhibit positive correlations between their action potential spike trains, corresponding to positive functional coupling, while cells of antagonistic polarities will have negative functional coupling strength.
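The per-slice encoding of Section 3.2 is a one-liner per slice: soft-threshold the inner products with the slice's centroids and concatenate. A minimal sketch (function name and argument layout are assumptions):

```python
import numpy as np

def encode(slices, centroids, kappa=0.25):
    """Encode one data point given per-slice centroid dictionaries.

    slices: list of 1-D arrays x^(j), one per slice (already whitened).
    centroids: list of matrices V^(j) of shape (p_j, k_j).
    Returns the concatenation of s(V^(j)^T x^(j), kappa) over all slices.
    """
    soft = lambda v: np.sign(v) * np.maximum(np.abs(v) - kappa, 0.0)
    return np.concatenate([soft(V.T @ x) for x, V in zip(slices, centroids)])
```

The resulting vector ˜x is what the text feeds to the logistic regression or random forest classifier.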
The coupling of two neighboring cells i, j can therefore be modeled as c_{i,j} ≃ y_i y_j, where y_i, y_j ∈ {+1, −1} denote cell polarities. Because far-apart cells do not excite or inhibit each other, we only include estimates of the functional coupling strength between neighboring cells, so as to avoid incorporating noise into our model. The neighborhood size is a hyperparameter of this approach that we study in Section 5. If G denotes the graph of neighboring cells in a recording, we only use cross-correlations of the spike trains of cells connected by an edge in G. Since we can estimate the position of each RGC in the lattice from its EI [13], we can form the graph G, which is a 2-dimensional regular geometric graph. If q is the number of edges in G, let P denote the linear map R^(n×n) → R^q returning the values P(C) = (C_{i,j})_{{i,j}∈E(G)} for cells i and j located within a critical distance; we use P* to denote the adjoint (transpose) operator. The complete matrix of pairwise couplings can then be written, up to observation noise, as yy^T, where y ∈ {+1, −1}^n is the vector of cell polarities (+1 for ON and −1 for OFF cells). The observations can therefore be modeled as c = P(yy^T) + ε, with ε observation noise, (2) and the recovery of yy^T is then formulated as a standard matrix completion problem. 4.1 Minimizing the nonconvex loss using warm-started Newton steps In this section, we show how to estimate y given the observation c = P(yy^T) + ε by minimizing the nonconvex loss ℓ(z) = (1/2)∥P(zz^T) − c∥_2^2. Even though minimizing this degree-4 polynomial loss function is NP-hard in general, we propose a Newton method warm-started with a spectral heuristic to approach the solution (see Algorithm 1). In similar contexts, when the sampling of entries is uniform, this type of spectral initialization followed by alternating minimization has been proven to converge to the global minimum of a least-squares loss analogous to ℓ [12].
While our sampling graph G is not an Erdős–Rényi graph, we empirically observed that its regular structure yields a reliable initial spectral guess falling within the basin of attraction of the global minimum of ℓ. In the subsequent Newton scheme, we iterate using the shifted Hessian matrix H(z) = 2 P*(P(zz^T) − c) + νI_n, where ν > 0 ensures positive definiteness H(z) ≻ 0. Whenever computing ν and H(z)^(−1) is expensive due to a potentially large number of cells n, replacing H(z)^(−1) by a diagonal or scalar approximation α/∥z∥_2^2 reduces the per-iteration cost at the price of slower convergence. We refer to this variant as a first-order method for minimizing the nonconvex objective, while ISTA [2] is a first-order method applied to the convex relaxation of the problem, as presented in the Appendix (see Figure 4, middle panel). Using the same convex relaxation, we prove in the Appendix that the proposed estimator has a classification accuracy of at least 1 − b∥ε∥_∞^2 with b ≈ 2.91.

Algorithm 1 Polarity matrix completion
Require: c observed couplings, P the projection operator
  Let λ, v be the leading eigenpair of P*(c)
  Initialize z_0 ← n √λ v / √(#revealed entries)
  for t = 0, 1, · · · do
    z_(t+1) ← z_t − H(z_t)^(−1) P*(P(z_t z_t^T) − c) z_t   \\ H(z_t) is the Hessian or an approximation
  end for

Input:    EI & ISI       EI & ISI        EI & ISI        EI only        ISI only       CCF
Filters:  our filters    rand. filters   rand. filters   our filters
Task      k = 30         k = 50          k = 10          k = 30
T         93.5 % (1.1)   88.3 % (1.9)    93.1 % (1.3)    86.0 % (2.6)   80.6 % (2.6)   –
P         81.5 % (3.0)   80.0 % (2.3)    77.8 % (2.3)    64.1 % (3.7)   76.8 % (3.8)   75.7 % (4.9)
T+P       78.0 % (3.3)   66.7 % (1.9)    72.0 % (1.7)    60.4 % (2.9)   64.7 % (2.9)   –

Table 1: Comparing performance for input data sources and filters. T: cell type identification. P: polarity identification. T+P: cell type and polarity identification. EIs cropped within 125 µm of the central electrode.
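The first-order variant of Algorithm 1 can be sketched compactly: realize P as a mask over the neighborhood graph, take the leading eigenpair of P*(c) for the spectral initialization, and descend the gradient 2 P*(P(zz^T) − c) z. This is a minimal sketch (fixed step size in place of the shifted-Hessian Newton step; names are assumptions):

```python
import numpy as np

def complete_polarities(C_obs, mask, iters=100, step=None):
    """First-order sketch of Algorithm 1: spectral initialization, then
    gradient steps on l(z) = 0.5 * ||P(z z^T) - c||_2^2.

    C_obs: (n, n) symmetric matrix of observed couplings.
    mask:  (n, n) boolean adjacency of the neighborhood graph G.
    Returns a +/-1 polarity estimate, defined up to a global sign flip.
    """
    n = C_obs.shape[0]
    M = np.where(mask, C_obs, 0.0)            # P*(c) as a dense matrix
    vals, vecs = np.linalg.eigh(M)            # eigh returns ascending order
    lam, v = vals[-1], vecs[:, -1]            # leading eigenpair
    z = np.sqrt(max(lam, 1e-12)) * v * n / np.sqrt(mask.sum())
    if step is None:
        step = 1.0 / (2.0 * n)                # small fixed step size
    for _ in range(iters):
        R = np.where(mask, np.outer(z, z) - M, 0.0)   # P*(P(z z^T) - c)
        z = z - step * 2.0 * (R @ z)                  # gradient of l(z)
    return np.sign(z)
```

On a regular geometric graph with noiseless couplings c_{i,j} = y_i y_j, the spectral initializer is already sign-consistent with y, and the gradient steps refine the magnitudes.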
5 Numerical experiments In this section, we benchmark the performance of the cell type classifiers introduced previously on datasets where the ground truth is available. For the RGCs in those datasets, experts manually hand-labeled the light response properties of the cells in the manner previously described in the literature [9, 3]. Our unlabeled data contained 17,457 × 12 (spatial symmetries) data points. The labeled data consist of 436 OFF midget, 652 OFF parasol, 964 ON midget, 607 ON parasol and 169 small bistratified cells assembled from 10 distinct recordings. RGC classification from their electrical features. Our numerical experiment consists of hiding one out of the 10 labeled recordings, learning cell classifiers on the 9 others, and testing the classifiers on the hidden recording. We chose to test the performance of the classifiers against individual recordings for two reasons. First, we wanted to compare the polarity prediction accuracy obtained from electrical features with the prediction made by matrix completion (see Section 4), and the matrix completion algorithm takes as input pairwise data obtained from a single recording only. Second, experimental parameters likely to influence the EIs and ISIs, such as recording temperature, vary from recording to recording but remain consistent within a recording. Since we want the reported scores to reflect expected performance on new recordings, excluding points from the test distribution gives us a more realistic proxy for the true test error. In Table 1 we report classification accuracies on 3 different classification tasks: 1. Cell type identification (T): midget vs. parasol vs. small bistratified cells; 2. Polarity identification (P): ON versus OFF cells; 3. Cell type and polarity (T+P): ON-midget vs. ON-parasol vs. OFF-midget vs. OFF-parasol vs. small bistratified. The header rows of Table 1 describe the data used as input for each column.
The first column represents the results of the method where the dictionary learning step is performed with k = 30 and EIs are recorded within a radius of 125 µm from the central electrode (19 electrodes on our array). We compare our method with an identical method where we replace the hand-specified filters by the random Gaussian filters of [17] (second column for k = 50 and third for k = 10). The performance of random filters opens perspectives for learning deeper predictors using random filters in the first layer. The impact of k on our filters can be seen in Figure 3, left-hand panel: larger k seems to bring further information for polarity prediction but not for cell type classification, which leads to an optimal choice of k ≃ 20 in the 5-class problem. In the 4th and 5th columns, we used only part of the feature sets at our disposal: EIs only and ISIs only, respectively. These results confirm that the joint use of both EIs and ISIs for cell classification is beneficial. Globally, cell type identification turns out to be an easier task than polarity prediction using per-cell descriptors. Figure 3, middle panel, illustrates the impact of EI diameter on classification accuracy. While a larger recording radius lets us make use of more signal, the amount of noise incorporated also increases with the number of electrodes taken into account, and we observe a trade-off in terms of signal-to-noise ratio on all three tasks. An interesting observation is the second jump in the accuracy of cell type prediction around an EI diameter of 325 µm, at which point we attain a peak performance of 96.8% ± 1.0.
We believe this jump takes place when axonal signals start being incorporated into the EI, and we believe these signals to be a strong indicator of cell type because of known differences in axonal conduction velocities [13]. Figure 3: (Left panel) Effect of the dictionary size k and (middle panel) of the EI radius on per-cell classification. (Right panel) Effect of the neighborhood size on polarity prediction using matrix completion. Figure 4: (Left panel) Observed coupling matrix. (Middle panel) Convergence of matrix completion algorithms. (Right panel) k-means with our initialization (SP-NMF) versus other choices. Prediction variance is also relatively low for cell type prediction compared to polarity prediction, and predicting polarity turns out to be significantly easier on some datasets than on others. On average, the logistic regression classifier we used performed slightly better (∼+1%) than random forests on the various tasks and data sets at our disposal. Matrix completion based polarity prediction. Matrix completion resulted in > 90% accuracy on three out of 10 datasets and in an average of 66.8% accuracy on the 7 other datasets.
We report the average performance in Table 1 even though it is inferior to the simpler classification approach, for two reasons: (a) the idea of using matrix completion for this task is new, and (b) it has high potential, as demonstrated by Figure 3, right-hand panel. On some datasets, matrix completion achieves 100% accuracy. However, on other datasets, either because of fragile spike sorting or because of other noise, the approach does not do as well. In Figure 3 (right-hand side) we examine the effect of the neighborhood size on prediction accuracy. Colors correspond to different datasets. For the sake of readability, we only show the results for 4 out of the 10 datasets: the best, the worst and 2 intermediate ones. The sensitivity to the maximum cell distance is clear on this plot. Bold curves correspond to the prediction resulting after 100 steps of our Newton algorithm. Dashed curves correspond to predictions by the first-order (nonconvex) method stopped after 100 steps, and dots are the prediction accuracies of the leading singular vector, i.e. the spectral initialization of our algorithm. Overall, the Newton algorithm seems to perform better than its rivals, and there appears to be an optimal radius for each dataset, corresponding to the characteristic distance between pairs of cells (here only parasols). This parameter varies from dataset to dataset and hence requires tuning before extracting the CCF data in order to get the best performance out of the algorithm. Warm-start strategy for dictionary learning. We refer to Figure 4, right-hand panel, for an illustration of our warm-start strategy for minimizing (1) as described in Section 3.1.
There, we compare dense (η = 0) k-means initialized with our double warm start (25 steps of unconstrained alternating minimization followed by 25 steps of nonnegative alternating minimization, referred to as SP-NMF) against a single spectral warm start with 50 steps of unconstrained alternating minimization (SP), 50 steps of nonnegative alternating minimization (NMF), and two standard baselines, namely random initialization and k-means++ initialization [1]. We postpone a theoretical study of this initialization choice to future work. Note that each step of the alternating minimization involves a few matrix-matrix products and element-wise operations on matrices. Using an NVIDIA Tesla K40 GPU drastically accelerated these steps, allowing us to scale up our experiments. 6 Discussion We developed accurate cell type classifiers using a unique collection of labeled and unlabeled electrical recordings and employing recent advances in several areas of machine learning. The results show strong empirical success of the methodology, which is highly scalable and adapted to the major applications discussed below. Matrix completion for binary classification is novel, and the two heuristics we used for minimizing our nonconvex objectives show convincing superiority over existing baselines. Future work will be dedicated to studying the properties of these algorithms. Recording Methods. Three major aspects of electrical recordings are critical for successful cell type identification from electrical signatures. First, high spatial resolution is required to detect the fine features of the EIs; much more widely spaced electrode arrays, such as those often used in the cortex, may not perform as well. Second, high temporal resolution is required to measure the ISI accurately; this suggests that optical measurements using Ca++ sensors would not be as useful as electrical measurements.
Third, large-scale recordings are required to detect many pairs of cells and estimate their functional interactions; electrode arrays with fewer channels may not suffice. Thus, large-scale, high-density electrophysiological recordings are uniquely well suited to the task of identifying cell types. Future directions. A probable source of variability in cell type classification is differences between retinal preparations, including eccentricity in the retina, inter-animal variability, and experimental variables such as temperature and the signal-to-noise ratio of the recording. In the present data, features were defined and assembled across a dozen different recordings. This motivates transfer learning to account for such variability, exploiting the fact that although the features may change somewhat between preparations (target domains), the underlying cell types and the fundamental differences in electrical signatures are expected to remain. We expect future work to result in models of higher complexity, trained on larger datasets and thus achieving invariance to ambient conditions (eccentricity and temperature) automatically. The model we used can be interpreted as a single-layer neural network. A straightforward development would be to increase the number of layers. The relative success of random filters in the first layer is a sign that one can hope for further automated improvement by building richer representations from the data itself, with minimal incorporation of prior knowledge. Application. Two major applications are envisioned. First, an extensive set of large-scale, high-density recordings from primate retina can now be mined for information on infrequently recorded cell types. Manual identification of cell types using their light response properties is extremely labor-intensive; the present approach promises to facilitate automated mining.
Second, the identification of cell types without light responses is fundamental for the development of high-resolution retinal prostheses of the future [11]. In such devices, it is necessary to identify which electrodes are capable of stimulating which cells, and to drive spiking in RGCs according to their type in order to deliver a meaningful visual signal to the brain. For this futuristic brain-machine interface application, our results solve a fundamental problem. Finally, it is hoped that these applications in the retina will also be relevant to other brain areas, where identification of neural cell types and customized electrical stimulation for high-resolution neural implants may be equally important in the future. Acknowledgement We are grateful to A. Montanari and D. Palanker for inspiring discussions and valuable comments, and to C. Rhoades for labeling the data. ER acknowledges support from grants AFOSR/DARPA FA9550-12-1-0411 and FA9550-13-1-0036. We thank the Stanford Data Science Initiative for financial support and NVIDIA Corporation for the donation of the Tesla K40 GPU we used. Data collection was supported by National Eye Institute grants EY017992 and EY018003 (EJC). Please contact EJC (ej@stanford.edu) for access to the data. References [1] D. Arthur and S. Vassilvitskii. k-means++: The advantages of careful seeding. In Proceedings of the Eighteenth Annual ACM-SIAM Symposium on Discrete Algorithms, 2007. [2] A. Beck and M. Teboulle. A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences, 2(1):183–202, 2009. [3] E. J. Chichilnisky and R. S. Kalmar. Functional asymmetries in ON and OFF ganglion cells of primate retina. The Journal of Neuroscience, 22(7):2737–2747, 2002. [4] A. Coates and A. Y. Ng. The importance of encoding versus training with sparse coding and vector quantization.
In International Conference on Machine Learning (ICML), 2011. [5] A. Coates, A. Y. Ng, and H. Lee. An analysis of single-layer networks in unsupervised feature learning. In International Conference on Artificial Intelligence and Statistics (AISTATS), pages 215–223, 2011. [6] D. M. Dacey. Origins of perception: retinal ganglion cell diversity and the creation of parallel visual pathways. In The Cognitive Neurosciences, pages 281–301. MIT Press, 2004. [7] D. M. Dacey and O. S. Packer. Colour coding in the primate retina: diverse cell types and cone-specific circuitry. Curr Opin Neurobiol, 13:421–427, 2003. [8] D. M. Dacey and M. R. Petersen. Dendritic field size and morphology of midget and parasol cells of the human retina. PNAS, 89:9666–9670, 1992. [9] S. H. Devries and D. A. Baylor. Mosaic arrangement of ganglion cell receptive fields in rabbit retina. Journal of Neurophysiology, 78(4):2048–2060, 1997. [10] M. Greschner, J. Shlens, C. Bakolista, G. D. Field, J. L. Gauthier, L. H. Jepson, A. Sher, A. M. Litke, and E. J. Chichilnisky. Correlated firing among major ganglion cell types in primate retina. J Physiol, 589:75–86, 2011. [11] L. H. Jepson, P. Hottowy, G. A. Wiener, W. Dabrowski, A. M. Litke, and E. J. Chichilnisky. High-fidelity reproduction of spatiotemporal visual signals for retinal prosthesis. Neuron, 83:87–92, 2014. [12] R. H. Keshavan, A. Montanari, and S. Oh. Matrix completion from a few entries. IEEE Transactions on Information Theory, 56(6):2980–2998, 2010. [13] P. H. Li, J. L. Gauthier, M. Schiff, A. Sher, D. Ahn, G. D. Field, M. Greschner, E. M. Callaway, A. M. Litke, and E. J. Chichilnisky. Anatomical identification of extracellularly recorded cells in large-scale multielectrode recordings. J Neurosci, 35(11):4663–75, 2015. [14] A. M. Litke, N. Bezayiff, E. J. Chichilnisky, W. Cunningham, W. Dabrowski, A. A. Grillo, M. I. Grivich, P. Grybos, P. Hottowy, S. Kachiguine, R. S. Kalmar, K. Mathieson, D. Petrusca, M.
Rahman, and A. Sher. What does the eye tell the brain? development of a system for the large-scale recording of retinal output activity. IEEE Trans. on Nuclear Science, 51(4):1434–1440, 2004. [15] J. Mairal, F. Bach, J. Ponce, and G. Sapiro. Online dictionary learning for sparse coding. In International Conference on Machine Learning (ICML), pages 689–696, 2009. [16] D. N. Mastronarde. Correlated firing of cat retinal ganglion cells. i. spontaneously active inputs to x- and y-cells. J Neurophysiol, 49(2):303–324, 1983. [17] A. Rahimi and B. Recht. Random features for large-scale kernel machines. In Advances in neural information processing systems (NIPS), pages 1177–1184, 2007. [18] L. C. L. Silveira and V.H. Perry. The topography of magnocellular projecting ganglion cells (m-ganglion cells) in the primate retina. Neuroscience, 40(1):217–237, 1991.
Maximum Likelihood Learning With Arbitrary Treewidth via Fast-Mixing Parameter Sets Justin Domke NICTA, Australian National University justin.domke@nicta.com.au Abstract Inference is typically intractable in high-treewidth undirected graphical models, making maximum likelihood learning a challenge. One way to overcome this is to restrict parameters to a tractable set, most typically the set of tree-structured parameters. This paper explores an alternative notion of a tractable set, namely a set of “fast-mixing parameters” where Markov chain Monte Carlo (MCMC) inference can be guaranteed to quickly converge to the stationary distribution. While it is common in practice to approximate the likelihood gradient using samples obtained from MCMC, such procedures lack theoretical guarantees. This paper proves that for any exponential family with bounded sufficient statistics (not just graphical models), when parameters are constrained to a fast-mixing set, gradient descent with gradients approximated by sampling will approximate the maximum likelihood solution inside the set with high probability. When unregularized, finding a solution that is ϵ-accurate in log-likelihood requires a total amount of effort cubic in 1/ϵ, disregarding logarithmic factors. When ridge-regularized, strong convexity allows a solution ϵ-accurate in parameter distance with effort quadratic in 1/ϵ. Both of these provide a fully polynomial-time randomized approximation scheme. 1 Introduction In undirected graphical models, maximum likelihood learning is intractable in general. For example, Jerrum and Sinclair [1993] show that evaluation of the partition function (which can easily be computed from the likelihood) for an Ising model is #P-complete, and that even the existence of a fully-polynomial time randomized approximation scheme (FPRAS) for the partition function would imply that RP = NP.
If the model is well-specified (meaning that the target distribution falls in the assumed family) then there exist several methods that can efficiently recover correct parameters, among them the pseudolikelihood [3], score matching [16, 22], composite likelihoods [20, 30], Mizrahi et al.’s [2014] method based on parallel learning in local clusters of nodes and Abbeel et al.’s [2006] method based on matching local probabilities. While often useful, these methods have some drawbacks. First, these methods typically have inferior sample complexity to the likelihood. Second, these all assume a well-specified model. If the target distribution is not in the assumed class, the maximum-likelihood solution will converge to the M-projection (minimum of the KL-divergence), but these estimators do not have similar guarantees. Third, even when these methods succeed, they typically yield a distribution in which inference is still intractable, and so it may be infeasible to actually make use of the learned distribution. Given these issues, a natural approach is to restrict the graphical model parameters to a tractable set Θ, in which learning and inference can be performed efficiently. The gradient of the likelihood is determined by the marginal distributions, whose difficulty is typically determined by the treewidth of the graph. Thus, probably the most natural tractable family is the set of tree-structured distributions, where $\Theta = \{\theta : \exists \text{ tree } T, \forall (i, j) \notin T, \theta_{ij} = 0\}$. The Chow-Liu algorithm [1968] provides an efficient method for finding the maximum likelihood parameter vector θ in this set, by computing the mutual information of all empirical pairwise marginals, and finding the maximum spanning tree. Similarly, Heinemann and Globerson [2014] give a method to efficiently learn high-girth models where correlation decay limits the error of approximate inference, though this will not converge to the M-projection when the model is mis-specified.
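The Chow-Liu procedure just mentioned (pairwise mutual informations, then a maximum spanning tree) can be sketched in a few lines. The following is a minimal illustration, not the paper's code: it uses a made-up binary toy dataset, and Kruskal's algorithm with a simple union-find for the spanning tree.

```python
import math
from itertools import combinations

def mutual_information(data, i, j):
    """Empirical mutual information between binary variables i and j."""
    n = len(data)
    pij, pi, pj = {}, {}, {}
    for x in data:
        pij[(x[i], x[j])] = pij.get((x[i], x[j]), 0) + 1 / n
        pi[x[i]] = pi.get(x[i], 0) + 1 / n
        pj[x[j]] = pj.get(x[j], 0) + 1 / n
    return sum(p * math.log(p / (pi[a] * pj[b])) for (a, b), p in pij.items())

def chow_liu_tree(data, num_vars):
    """Maximum spanning tree over pairwise mutual informations (Kruskal)."""
    edges = sorted(((mutual_information(data, i, j), i, j)
                    for i, j in combinations(range(num_vars), 2)),
                   reverse=True)
    parent = list(range(num_vars))
    def find(u):
        while parent[u] != u:
            parent[u] = parent[parent[u]]
            u = parent[u]
        return u
    tree = []
    for w, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:                 # adding (i, j) does not create a cycle
            parent[ri] = rj
            tree.append((i, j))
    return tree

# Toy data: x1 copies x0 exactly, x2 is only weakly related.
data = [(0, 0, 1), (1, 1, 0), (0, 0, 0), (1, 1, 1), (0, 0, 1), (1, 1, 0)]
tree = chow_liu_tree(data, 3)
```

With this data the strongest edge is (0, 1), since x0 and x1 are identical, so it is always selected first.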
This paper considers a fundamentally different notion of tractability, namely a guarantee that Markov chain Monte Carlo (MCMC) sampling will quickly converge to the stationary distribution. Our fundamental result is that if Θ is such a set, and one can project onto Θ, then there exists a FPRAS for the maximum likelihood solution inside Θ. While inspired by graphical models, this result works entirely in the exponential family framework, and applies generally to any exponential family with bounded sufficient statistics. The existence of a FPRAS is established by analyzing a common existing strategy for maximum likelihood learning of exponential families, namely gradient descent where MCMC is used to generate samples and approximate the gradient. It is natural to conjecture that, if the Markov chain is fast mixing, is run long enough, and enough gradient descent iterations are used, this will converge to nearly the optimum of the likelihood inside Θ, with high probability. This paper shows that this is indeed the case. A separate analysis is used for the ridge-regularized case (using strong convexity) and the unregularized case (which is merely convex). 2 Setup Though notation is introduced when first used, the most important symbols are given here for reference.
• θ - parameter vector to be learned
• $M_\theta$ - Markov chain operator corresponding to θ
• $\theta_k$ - estimated parameter vector at the k-th gradient descent iteration
• $q_k = M^v_{\theta_{k-1}} r$ - approximate distribution sampled from at iteration k (v iterations of the Markov chain corresponding to $\theta_{k-1}$, from an arbitrary starting distribution r)
• Θ - constraint set for θ
• f - negative log-likelihood on the training data
• L - Lipschitz constant for the gradient of f
• $\theta^* = \arg\min_{\theta\in\Theta} f(\theta)$ - minimizer of the likelihood inside Θ
• K - total number of gradient descent steps
• M - total number of samples drawn via MCMC
• N - length of the vector x
• v - number of Markov chain transitions applied for each sample
• C, α - parameters determining the mixing rate of the Markov chain (Equation 3)
• $R_a$ - sufficient statistics norm bound
• $\epsilon_f$ - desired optimization accuracy for f
• $\epsilon_\theta$ - desired optimization accuracy for θ
• δ - permitted probability of failure to achieve a given approximation accuracy
This paper is concerned with an exponential family of the form $p_\theta(x) = \exp(\theta \cdot t(x) - A(\theta))$, where t(x) is a vector of sufficient statistics, and the log-partition function A(θ) ensures normalization. An undirected model can be seen as an exponential family where t consists of indicator functions for each possible configuration of each clique [32]. While such graphical models motivate this work, the results are most naturally stated in terms of an exponential family and apply more generally.
Algorithm 1:
• Initialize $\theta_0 = 0$.
• For k = 1, 2, ..., K:
– Draw samples. For i = 1, ..., M, sample $x^{k-1}_i \sim q_{k-1} := M^v_{\theta_{k-1}} r$.
– Estimate the gradient as $f'(\theta_{k-1}) + e_k \leftarrow \frac{1}{M}\sum_{i=1}^{M} t(x^{k-1}_i) - \bar{t} + \lambda\theta_{k-1}$.
– Update the parameter vector as $\theta_k \leftarrow \Pi_\Theta\left[\theta_{k-1} - \frac{1}{L}\left(f'(\theta_{k-1}) + e_k\right)\right]$.
• Output $\theta_K$ or $\frac{1}{K}\sum_{k=1}^{K}\theta_k$.
Figure 1: Left: Algorithm 1, approximate gradient descent with gradients approximated via MCMC, analyzed in this paper. Right: A cartoon of the desired performance, stochastically finding a solution near $\theta^*$, the minimum of the regularized negative log-likelihood f(θ) in the set Θ.
We are interested in performing maximum-likelihood learning, i.e. minimizing, for a dataset $z_1, ..., z_D$,
$$f(\theta) = -\frac{1}{D}\sum_{i=1}^{D}\log p_\theta(z_i) + \frac{\lambda}{2}\|\theta\|_2^2 = A(\theta) - \theta\cdot\bar{t} + \frac{\lambda}{2}\|\theta\|_2^2, \qquad (1)$$
where we define $\bar{t} = \frac{1}{D}\sum_{i=1}^{D} t(z_i)$. It is easy to see that the gradient of f takes the form $f'(\theta) = E_{p_\theta}[t(X)] - \bar{t} + \lambda\theta$. If one would like to optimize f using a gradient-based method, computing the expectation of t(X) with respect to $p_\theta$ can present a computational challenge.
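Algorithm 1 can be made concrete on a toy model. The code below is a hedged sketch, not the paper's implementation: it runs the algorithm on a two-node Ising model with a single sufficient statistic t(x) = x0·x1, using univariate Gibbs sampling as the operator M_θ and coordinate clipping as the projection Π_Θ. The box radius, ridge weight, target mean t_bar, and the K, M, v values are arbitrary demo choices; L follows the Theorem 1 formula with R2 = 1.

```python
import math
import random

random.seed(0)
beta_box = 0.2           # assumed fast-mixing constraint: |theta| <= beta_box
lam = 1.0                # ridge regularization weight (arbitrary demo value)
L = 4 * 1.0 ** 2 + lam   # Lipschitz constant L = 4*R2^2 + lambda, with R2 = 1
t_bar = 0.5              # empirical mean of t over a hypothetical training set

def t(x):
    """Single sufficient statistic for the toy model."""
    return x[0] * x[1]

def gibbs_step(x, theta):
    """One sweep of univariate Gibbs sampling for p(x) ∝ exp(theta * x0 * x1)."""
    x = list(x)
    for i in (0, 1):
        p_plus = 1.0 / (1.0 + math.exp(-2.0 * theta * x[1 - i]))
        x[i] = 1 if random.random() < p_plus else -1
    return tuple(x)

def estimate_gradient(theta, M=200, v=20):
    """Noisy gradient f'(theta) + e_k: sample mean of t, minus t_bar, plus ridge."""
    total = 0.0
    for _ in range(M):
        x = (random.choice([-1, 1]), random.choice([-1, 1]))  # arbitrary start r
        for _ in range(v):
            x = gibbs_step(x, theta)
        total += t(x)
    return total / M - t_bar + lam * theta

theta = 0.0                                  # initialize theta_0 = 0
for k in range(30):                          # K = 30 gradient steps
    g = estimate_gradient(theta)
    theta = max(-beta_box, min(beta_box, theta - g / L))  # project onto Theta
theta_hat = theta
```

Because t_bar = 0.5 pulls the coupling upward, the iterates drift toward the boundary of the fast-mixing box and the projection keeps them inside it.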
With discrete graphical models, the expected value of t is determined by the marginal distributions of each factor in the graph. Typically, the computational difficulty of computing these marginal distributions is determined by the treewidth of the graph: if the graph is a tree (or close to a tree), the marginals can be computed by the junction-tree algorithm [18]. One option, with high treewidth, is to approximate the marginals with a variational method. This can be seen as exactly optimizing a “surrogate likelihood” approximation of Eq. 1 [31]. Another common approach is to use Markov chain Monte Carlo (MCMC) to compute a sample $\{x_i\}_{i=1}^M$ from a distribution close to $p_\theta$, and then approximate $E_{p_\theta}[t(X)]$ by $(1/M)\sum_{i=1}^M t(x_i)$. This strategy is widely used, varying in the model type, the sampling algorithm, how samples are initialized, the details of optimization, and so on [10, 25, 27, 24, 7, 33, 11, 2, 29, 5]. Recently, Steinhardt and Liang [28] proposed learning in terms of the stationary distribution obtained from a chain with a nonzero restart probability, which is fast-mixing by design. While popular, such strategies generally lack theoretical guarantees. If one were able to exactly sample from $p_\theta$, this could be understood simply as stochastic gradient descent. But, with MCMC, one can only sample from a distribution approximating $p_\theta$, meaning the gradient estimate is not only noisy, but also biased. In general, one can ask how the step size, number of iterations, number of samples, and number of Markov chain transitions should be set to achieve a given level of convergence. The gradient descent strategy analyzed in this paper, in which one updates a parameter vector $\theta_k$ using approximate gradients, is outlined and shown as a cartoon in Figure 1. Here, and in the rest of the paper, we use $p_k$ as a shorthand for $p_{\theta_k}$, and we let $e_k$ denote the difference between the estimated gradient and the true gradient $f'(\theta_{k-1})$.
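To make the Monte Carlo approximation of $E_{p_\theta}[t(X)]$ concrete: for a model small enough to enumerate, one can compare the exact expectation against the sample average $(1/M)\sum_i t(x_i)$. The sketch below (toy numbers, not from the paper) uses exact inverse-CDF sampling, so the estimate is unbiased and only noisy; MCMC would additionally introduce the bias discussed above.

```python
import math
import random

random.seed(7)
theta = 0.8
# Two-node Ising model p(x) ∝ exp(theta * x0 * x1); small enough to enumerate.
X = [(-1, -1), (-1, 1), (1, -1), (1, 1)]

def t(x):
    return x[0] * x[1]

w = [math.exp(theta * t(x)) for x in X]
Z = sum(w)
exact = sum(wi * t(x) for wi, x in zip(w, X)) / Z  # closed form: tanh(theta)

def sample():
    """Exact sampling by inverting the cumulative distribution."""
    u, acc = random.random() * Z, 0.0
    for wi, x in zip(w, X):
        acc += wi
        if u <= acc:
            return x
    return X[-1]

M = 20000
estimate = sum(t(sample()) for _ in range(M)) / M
```

For this model the exact value equals tanh(θ), and the sample average converges to it at the usual 1/√M rate.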
The projection operator is defined by $\Pi_\Theta[\varphi] = \arg\min_{\theta\in\Theta}\|\theta - \varphi\|_2$. We assume that the parameter vector is constrained to a set Θ such that MCMC is guaranteed to mix at a certain rate (Section 3.1). With convexity, this assumption can bound the mean and variance of the errors at each iteration, leading to a bound on the sum of errors. With strong convexity, the error of the gradient at each iteration is bounded with high probability. Then, using results due to [26] for projected gradient descent with errors in the gradient, we show a schedule for the number of iterations K, the number of samples M, and the number of Markov transitions v such that, with high probability,
$$f\left(\frac{1}{K}\sum_{k=1}^{K}\theta_k\right) - f(\theta^*) \le \epsilon_f \quad \text{or} \quad \|\theta_K - \theta^*\|_2 \le \epsilon_\theta,$$
for the convex or strongly convex cases, respectively, where $\theta^* \in \arg\min_{\theta\in\Theta} f(\theta)$. The total number of Markov transitions applied through the entire algorithm, KMv, grows as $(1/\epsilon_f)^3\log(1/\epsilon_f)$ for the convex case, as $(1/\epsilon_\theta^2)\log(1/\epsilon_\theta^2)$ for the strongly convex case, and polynomially in all other parameters of the problem. 3 Background 3.1 Mixing times and Fast-Mixing Parameter Sets This section discusses some background on mixing times for MCMC. Typically, mixing times are defined in terms of the total-variation distance $\|p - q\|_{TV} = \max_A |p(A) - q(A)|$, where the maximum ranges over the sample space. For discrete distributions, this can be shown to be equivalent to $\|p - q\|_{TV} = \frac{1}{2}\sum_x |p(x) - q(x)|$. We assume that a sampling algorithm is known, a single iteration of which can be thought of as an operator $M_\theta$ that transforms some starting distribution into another. The stationary distribution is $p_\theta$, i.e. $\lim_{v\to\infty} M^v_\theta q = p_\theta$ for all q. Informally, a Markov chain will be fast mixing if the total variation distance between the starting distribution and the stationary distribution decays rapidly in the length of the chain. This paper assumes that a convex set Θ and constants C and α are known such that for all θ ∈ Θ and all distributions q, $\|M^v_\theta q - p_\theta\|_{TV} \le C\alpha^v.$
(2) This means that the distance between an arbitrary starting distribution q and the stationary distribution $p_\theta$ decays geometrically in terms of the number of Markov iterations v. This assumption is justified by the Convergence Theorem [19, Theorem 4.9], which states that if M is irreducible and aperiodic with stationary distribution p, then there exist constants α ∈ (0, 1) and C > 0 such that
$$d(v) := \sup_q \|M^v q - p\|_{TV} \le C\alpha^v. \qquad (3)$$
Many results on mixing times in the literature, however, are stated in a less direct form. Given a constant ϵ, the mixing time is defined by $\tau(\epsilon) = \min\{v : d(v) \le \epsilon\}$. It often happens that bounds on mixing times are stated as something like $\tau(\epsilon) \le \lceil a + b\ln\frac{1}{\epsilon}\rceil$ for some constants a and b. It follows from this that $\|M^v q - p\|_{TV} \le C\alpha^v$ with $C = \exp(a/b)$ and $\alpha = \exp(-1/b)$. A simple example of a fast-mixing exponential family is the Ising model, defined for $x \in \{-1, +1\}^N$ as
$$p(x|\theta) = \exp\Big(\sum_{(i,j)\in\text{Pairs}} \theta_{ij}x_ix_j + \sum_i \theta_i x_i - A(\theta)\Big).$$
A simple result for this model is that, if the maximum degree of any node is Δ and $|\theta_{ij}| \le \beta$ for all (i, j), then for univariate Gibbs sampling with random updates, $\tau(\epsilon) \le \left\lceil \frac{N\log(N/\epsilon)}{1-\Delta\tanh(\beta)}\right\rceil$ [19]. The algorithm discussed in this paper needs the ability to project some parameter vector φ onto Θ, i.e. to find $\arg\min_{\theta\in\Theta}\|\theta - \varphi\|_2$. Projecting a set of arbitrary parameters onto this set of fast-mixing parameters is trivial: simply set $\theta_{ij} = \beta$ for $\theta_{ij} > \beta$ and $\theta_{ij} = -\beta$ for $\theta_{ij} < -\beta$. For denser graphs, it is known [12, 9] that, for a matrix norm $\|\cdot\|$ that is the spectral norm $\|\cdot\|_2$, or an induced 1- or infinity-norm,
$$\tau(\epsilon) \le \left\lceil \frac{N\log(N/\epsilon)}{1-\|R(\theta)\|} \right\rceil, \qquad (4)$$
where $R_{ij}(\theta) = |\theta_{ij}|$. Domke and Liu [2013] show how to perform this projection for the Ising model when $\|\cdot\|$ is the spectral norm $\|\cdot\|_2$ with a convex optimization utilizing the singular value decomposition in each iteration.
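The two mixing-time bounds above are easy to evaluate in code: the box projection is coordinate-wise clipping, the degree-based bound is a one-line formula, and the Equation 4 condition needs only $\|R(\theta)\|$. A minimal sketch with made-up coupling values (power iteration stands in for a proper spectral-norm routine):

```python
import math

def project_box(theta, beta):
    """Projection onto {θ : |θ_ij| ≤ β} is coordinate-wise clipping."""
    return {e: max(-beta, min(beta, w)) for e, w in theta.items()}

def tau_degree(N, max_degree, beta, eps):
    """Degree-based bound τ(ε) ≤ ⌈N log(N/ε)/(1 − Δ tanh β)⌉; needs Δ tanh β < 1."""
    assert max_degree * math.tanh(beta) < 1
    return math.ceil(N * math.log(N / eps) / (1 - max_degree * math.tanh(beta)))

def spectral_norm(A, iters=200):
    """Largest eigenvalue of a symmetric nonnegative matrix by power iteration."""
    n, x, lam = len(A), [1.0] * len(A), 0.0
    for _ in range(iters):
        y = [sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
        lam = math.sqrt(sum(yi * yi for yi in y))
        x = [yi / lam for yi in y]
    return lam

def tau_spectral(theta_mat, eps):
    """Eq. 4 bound τ(ε) ≤ ⌈N log(N/ε)/(1 − ||R(θ)||)⌉, R_ij = |θ_ij|; needs ||R|| < 1."""
    R = [[abs(w) for w in row] for row in theta_mat]
    norm = spectral_norm(R)
    assert norm < 1
    N = len(theta_mat)
    return math.ceil(N * math.log(N / eps) / (1 - norm))

theta_proj = project_box({(0, 1): 0.5, (1, 2): -0.7, (0, 2): 0.1}, beta=0.2)
tau_box = tau_degree(N=16, max_degree=4, beta=0.2, eps=0.25)  # e.g. a 4x4 grid
tau_mat = tau_spectral([[0.0, 0.2, 0.1],
                        [0.2, 0.0, 0.15],
                        [0.1, 0.15, 0.0]], eps=0.25)
```

Clipping leaves couplings already inside the box untouched, and both bounds shrink as the couplings weaken.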
Loosely speaking, the above result shows that univariate Gibbs sampling on the Ising model is fast-mixing, as long as the interaction strengths are not too strong. Conversely, Jerrum and Sinclair [1993] exhibited an alternative Markov chain for the Ising model that is rapidly mixing for arbitrary interaction strengths, provided the model is ferromagnetic, i.e. that all interaction strengths are positive with $\theta_{ij} \ge 0$, and that the field is unidirectional. This Markov chain is based on sampling in a different “subgraphs world” state space. Nevertheless, it can be used to estimate derivatives of the Ising model log-partition function with respect to parameters, which allows estimation of the gradient of the log-likelihood. Huber [2012] provided a simulation reduction to obtain an Ising model sample from a subgraphs world sample. More generally, Liu and Domke [2014] consider a pairwise Markov random field, defined as
$$p(x|\theta) = \exp\Big(\sum_{i,j}\theta_{ij}(x_i, x_j) + \sum_i \theta_i(x_i) - A(\theta)\Big),$$
and show that, if one defines $R_{ij}(\theta) = \max_{a,b,c}\frac{1}{2}|\theta_{ij}(a, b) - \theta_{ij}(a, c)|$, then again Equation 4 holds. An algorithm for projecting onto the set $\Theta = \{\theta : \|R(\theta)\| \le c\}$ exists. There are many other mixing-time bounds for different algorithms and different types of models [19]. The most common algorithms are univariate Gibbs sampling (often called Glauber dynamics in the mixing time literature) and Swendsen-Wang sampling. The Ising and Potts models are the most common distributions studied, either with a grid or fully-connected graph structure. Often, the motivation for studying these systems is to understand physical systems, or to mathematically characterize phase transitions in mixing time that occur as interaction strengths vary. As such, many existing bounds assume uniform interaction strengths. For all these reasons, these bounds typically require some adaptation for a learning setting. 4 Main Results 4.1 Lipschitz Gradient For lack of space, detailed proofs are postponed to the appendix.
However, informal proof sketches are provided to give some intuition for results that have longer proofs. Our first main result is that the regularized log-likelihood has a Lipschitz gradient. Theorem 1. The regularized log-likelihood gradient is L-Lipschitz with $L = 4R_2^2 + \lambda$, i.e.
$$\|f'(\theta) - f'(\varphi)\|_2 \le (4R_2^2 + \lambda)\|\theta - \varphi\|_2.$$
Proof sketch. It is easy to show, by the triangle inequality, that $\|f'(\theta) - f'(\varphi)\|_2 \le \|\frac{dA}{d\theta} - \frac{dA}{d\varphi}\|_2 + \lambda\|\theta - \varphi\|_2$. Next, using the assumption that $\|t(x)\|_2 \le R_2$, one can bound $\|\frac{dA}{d\theta} - \frac{dA}{d\varphi}\|_2 \le 2R_2\|p_\theta - p_\varphi\|_{TV}$. Finally, some effort can bound $\|p_\theta - p_\varphi\|_{TV} \le 2R_2\|\theta - \varphi\|_2$. 4.2 Convex convergence Now, our first major result is a guarantee on convergence that is true both in the regularized case where λ > 0 and the unregularized case where λ = 0. Theorem 2. With probability at least 1 − δ, as long as $M \ge 3K/\log(\frac{1}{\delta})$, Algorithm 1 will satisfy
$$f\left(\frac{1}{K}\sum_{k=1}^{K}\theta_k\right) - f(\theta^*) \le \frac{8R_2^2}{KL}\left(\frac{L\|\theta_0 - \theta^*\|_2}{4R_2} + \log\frac{1}{\delta} + \frac{K}{\sqrt{M}} + KC\alpha^v\right)^2.$$
Proof sketch. First, note that f is convex, since the Hessian of f is the covariance of t(X) when λ = 0, and λ > 0 only adds a quadratic. Now, define the quantity $d_k = \frac{1}{M}\sum_{m=1}^{M} t(X^k_m) - E_{q_k}[t(X)]$ to be the difference between the estimated expected value of t(X) under $q_k$ and the true value. An elementary argument can bound the expected value of $\|d_k\|$, while the Efron-Stein inequality can bound its variance. Using both of these bounds in Bernstein's inequality can then show that, with probability 1 − δ, $\sum_{k=1}^{K}\|d_k\| \le 2R_2(K/\sqrt{M} + \log\frac{1}{\delta})$. Finally, we can observe that $\sum_{k=1}^{K}\|e_k\| \le \sum_{k=1}^{K}\|d_k\| + \sum_{k=1}^{K}\|E_{q_k}[t(X)] - E_{p_{\theta_k}}[t(X)]\|_2$. By the assumption on mixing speed, the last term is bounded by $2KR_2C\alpha^v$. And so, with probability 1 − δ, $\sum_{k=1}^{K}\|e_k\| \le 2R_2(K/\sqrt{M} + \log\frac{1}{\delta}) + 2KR_2C\alpha^v$. Finally, a result due to Schmidt et al. [26] on the convergence of gradient descent with errors in estimated gradients gives the result. Intuitively, this result has the right character. If M grows on the order of $K^2$ and v grows on the order of $\log K/(-\log\alpha)$, then all terms inside the quadratic will be held constant, and so if we set K on the order of 1/ϵ, the sub-optimality will be on the order of ϵ with a total computational effort roughly on the order of $(1/\epsilon)^3\log(1/\epsilon)$. The following results pursue this more carefully. Firstly, one can observe that a minimum amount of work must be performed. Theorem 3. For a, b, c, α > 0, if K, M, v > 0 are set so that
$$\frac{1}{K}\left(a + b\frac{K}{\sqrt{M}} + Kc\alpha^v\right)^2 \le \epsilon,$$
then
$$KMv \ge \frac{a^4 b^2}{\epsilon^3}\,\frac{\log\frac{ac}{\epsilon}}{-\log\alpha}.$$
Since it must be true that $a/\sqrt{K} + b\sqrt{K/M} + \sqrt{K}c\alpha^v \le \sqrt{\epsilon}$, each of these three terms must also be at most $\sqrt{\epsilon}$, giving lower bounds on K, M, and v. Multiplying these gives the result. Next, an explicit schedule for K, M, and v is possible, in terms of a convex combination of parameters $\beta_1, \beta_2, \beta_3$. Comparing this to the lower bound above shows that this is not too far from optimal. Theorem 4. Suppose that a, b, c, α > 0. If $\beta_1 + \beta_2 + \beta_3 = 1$ and $\beta_1, \beta_2, \beta_3 > 0$, then setting
$$K = \frac{a^2}{\beta_1^2\epsilon}, \quad M = \left(\frac{ab}{\beta_1\beta_2\epsilon}\right)^2, \quad v = \frac{\log\frac{ac}{\beta_1\beta_3\epsilon}}{-\log\alpha}$$
is sufficient to guarantee that $\frac{1}{K}(a + b\frac{K}{\sqrt{M}} + Kc\alpha^v)^2 \le \epsilon$, with a total work of
$$KMv = \frac{1}{\beta_1^4\beta_2^2}\,\frac{a^4 b^2}{\epsilon^3}\,\frac{\log\frac{ac}{\beta_1\beta_3\epsilon}}{-\log\alpha}.$$
Simply verify that the ϵ bound holds, and multiply the terms together. For example, setting $\beta_1 = 0.66$, $\beta_2 = 0.33$ and $\beta_3 = 0.01$ gives
$$KMv \approx 48.4\,\frac{a^4 b^2}{\epsilon^3}\,\frac{\log\frac{ac}{\epsilon} + 5.03}{-\log\alpha}.$$
Finally, we can give an explicit schedule for K, M, and v, and bound the total amount of work that needs to be performed. Theorem 5. If $D \ge \max\left\{\|\theta_0 - \theta^*\|_2, \frac{4R_2}{L}\log\frac{1}{\delta}\right\}$, then for all ϵ there is a setting of K, M, v such that $f(\frac{1}{K}\sum_{k=1}^{K}\theta_k) - f(\theta^*) \le \epsilon_f$ with probability 1 − δ and
$$KMv \le \frac{32 L R_2^2 D^4}{\beta_1^4\beta_2^2\epsilon_f^3(1-\alpha)}\log\frac{4DR_2C}{\beta_1\beta_3\epsilon_f}.$$
[Proof sketch] This follows from setting K, M, and v as in Theorem 4 with $a = L\|\theta_0 - \theta^*\|_2/(4R_2) + \log\frac{1}{\delta}$, b = 1, c = C, and $\epsilon = \epsilon_f L/(8R_2^2)$. 4.3 Strongly Convex Convergence This section gives the main result for convergence that is true only in the regularized case where λ > 0.
Again, the main difficulty in this proof is showing that the sum of the errors of the estimated gradients at each iteration is small. This is done by using a concentration inequality to show that the error of each estimated gradient is small, and then applying a union bound to show that the sum is small. The main result is as follows. Theorem 6. When the regularization constant obeys λ > 0, with probability at least 1 − δ, Algorithm 1 will satisfy
$$\|\theta_K - \theta^*\|_2 \le \left(1 - \frac{\lambda}{L}\right)^K\|\theta_0 - \theta^*\|_2 + \frac{L}{\lambda}\left(\sqrt{\frac{R_2^2}{M}}\left(1 + \sqrt{2\log\frac{K}{\delta}}\right) + 2R_2C\alpha^v\right).$$
Proof sketch. When λ = 0, f is convex (as in Theorem 2), and so f is strongly convex when λ > 0. The basic proof technique here is to decompose the error in a particular step as $\|e_{k+1}\|_2 \le \|\frac{1}{M}\sum_{i=1}^{M} t(x^k_i) - E_{q_k}[t(X)]\|_2 + \|E_{q_k}[t(X)] - E_{p_{\theta_k}}[t(X)]\|_2$. A multidimensional variant of Hoeffding's inequality can bound the first term, with probability 1 − δ′, by $R_2(1 + \sqrt{2\log\frac{1}{\delta'}})/\sqrt{M}$, while our assumption on mixing speed can bound the second term by $2R_2C\alpha^v$. Applying this to all iterations using δ′ = δ/K gives that all errors are simultaneously bounded as before. This can then be used in another result due to Schmidt et al. [26] on the convergence of gradient descent with errors in estimated gradients in the strongly convex case. A similar proof strategy could be used for the convex case where, rather than directly bounding the sum of the norms of the errors of all steps using the Efron-Stein inequality and Bernstein's bound, one could simply bound the error of each step using a multidimensional Hoeffding-type inequality, and then apply this with probability δ/K to each step. This yields a slightly weaker result than that shown in Theorem 2. The reason for applying a uniform bound on the errors in gradients here is that Schmidt et al.'s bound [26] on the convergence of proximal gradient descent on strongly convex functions depends not just on the sum of the norms of gradient errors, but on a non-uniform weighted variant of these.
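The first term of Theorem 6 is the standard $(1 - \lambda/L)^K$ contraction of gradient descent on a λ-strongly convex function with L-Lipschitz gradient. This toy check (a diagonal quadratic, exact gradients, arbitrary numbers) illustrates just that deterministic part, without the sampling-error terms:

```python
lam, L = 1.0, 10.0
h = [lam, 4.0, L]                 # Hessian eigenvalues, spectrum within [lam, L]
theta_star = [1.0, -2.0, 0.5]     # minimizer (arbitrary demo values)

def dist(theta):
    """Euclidean distance to the minimizer."""
    return sum((a - b) ** 2 for a, b in zip(theta, theta_star)) ** 0.5

theta = [0.0, 0.0, 0.0]
errs = [dist(theta)]
for k in range(20):
    # Exact gradient step, step size 1/L, on f(θ) = ½ Σ_i h_i (θ_i − θ*_i)².
    theta = [a - hi * (a - b) / L for a, hi, b in zip(theta, h, theta_star)]
    errs.append(dist(theta))

rate = 1 - lam / L
bound_holds = all(errs[k] <= rate ** k * errs[0] + 1e-12 for k in range(len(errs)))
```

Each coordinate shrinks by a factor 1 − h_i/L per step, so the overall error is dominated by the slowest direction, which contracts at exactly rate 1 − λ/L.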
Again, we consider how to set parameters to guarantee that $\theta_K$ is not too far from $\theta^*$ with a minimum amount of work. Firstly, we show a lower bound. Theorem 7. Suppose a, b, c > 0. Then for any K, M, v such that $\gamma^K a + \frac{b}{\sqrt{M}}\sqrt{\log(K/\delta)} + c\alpha^v \le \epsilon$, it must be the case that
$$KMv \ge \frac{b^2}{\epsilon^2}\,\frac{\log\frac{a}{\epsilon}\log\frac{c}{\epsilon}}{(-\log\gamma)(-\log\alpha)}\log\left(\frac{\log\frac{a}{\epsilon}}{\delta(-\log\gamma)}\right).$$
[Proof sketch] This is established by noticing that $\gamma^K a$, $\frac{b}{\sqrt{M}}\sqrt{\log\frac{K}{\delta}}$, and $c\alpha^v$ must each be less than ϵ, giving lower bounds on K, M, and v. Next, we can give an explicit schedule that is not too far off from this lower bound. Theorem 8. Suppose that a, b, c, α > 0. If $\beta_1 + \beta_2 + \beta_3 = 1$, $\beta_i > 0$, then setting
$$K = \frac{\log\frac{a}{\beta_1\epsilon}}{-\log\gamma}, \quad M = \frac{b^2}{\epsilon^2\beta_2^2}\left(1 + \sqrt{2\log(K/\delta)}\right)^2, \quad v = \frac{\log\frac{c}{\beta_3\epsilon}}{-\log\alpha}$$
is sufficient to guarantee that $\gamma^K a + \frac{b}{\sqrt{M}}(1 + \sqrt{2\log(K/\delta)}) + c\alpha^v \le \epsilon$, with a total work of at most
$$KMv \le \frac{b^2}{\epsilon^2\beta_2^2}\,\frac{\log\frac{a}{\beta_1\epsilon}\log\frac{c}{\beta_3\epsilon}}{(-\log\gamma)(-\log\alpha)}\left(1 + \sqrt{2\log\frac{\log\frac{a}{\beta_1\epsilon}}{\delta(-\log\gamma)}}\right)^2.$$
For example, if you choose $\beta_2 = 1/\sqrt{2}$ and $\beta_1 = \beta_3 = (1 - 1/\sqrt{2})/2 \approx 0.1464$, then this varies from the lower bound in Theorem 7 by a factor of two, and a multiplicative factor of $1/\beta_3 \approx 6.84$ inside the logarithmic terms. Corollary 9. If we choose $K \ge \frac{L}{\lambda}\log\frac{\|\theta_0 - \theta^*\|_2}{\beta_1\epsilon_\theta}$, $M \ge \frac{L^2 R_2^2}{\epsilon_\theta^2\beta_2^2\lambda^2}\left(1 + \sqrt{2\log(K/\delta)}\right)^2$, and $v \ge \frac{1}{1-\alpha}\log\frac{2LR_2C}{\beta_3\epsilon_\theta\lambda}$, then $\|\theta_K - \theta^*\|_2 \le \epsilon_\theta$ with probability at least 1 − δ, and the total amount of work is bounded by
$$KMv \le \frac{L^3 R_2^2}{\epsilon_\theta^2\beta_2^2\lambda^3(1-\alpha)}\log\left(\frac{\|\theta_0 - \theta^*\|_2}{\beta_1\epsilon_\theta}\right)\left(1 + \sqrt{2\log\left(\frac{L}{\lambda\delta}\log\frac{\|\theta_0 - \theta^*\|_2}{\beta_1\epsilon_\theta}\right)}\right)^2.$$
5 Discussion An important detail in the previous results is that the convex analysis gives convergence in terms of the regularized log-likelihood, while the strongly-convex analysis gives convergence in terms of the parameter distance. If we drop logarithmic factors, the amount of work necessary for $\epsilon_f$-optimality in the log-likelihood using the convex algorithm is of the order $1/\epsilon_f^3$, while the amount of work necessary for $\epsilon_\theta$-optimality using the strongly convex analysis is of the order $1/\epsilon_\theta^2$.
Though these quantities are not directly comparable, the standard bounds on sub-optimality for λ-strongly convex functions with L-Lipschitz gradients are that $\lambda\epsilon_\theta^2/2 \le \epsilon_f \le L\epsilon_\theta^2/2$. Thus, roughly speaking, in the regularized case the strongly convex analysis shows that $\epsilon_f$-optimality in the log-likelihood can be achieved with an amount of work only linear in $1/\epsilon_f$.
Figure 2: Ising Model Example. Left: The difference of the current test log-likelihood from the optimal log-likelihood on 5 random runs. Center: The distance of the current estimated parameters from the optimal parameters on 5 random runs. Right: The current estimated parameters on one run, as compared to the optimal parameters (far right).
6 Example While this paper claims no significant practical contribution, it is useful to visualize an example. Take an Ising model $p(x) \propto \exp(\sum_{(i,j)\in\text{Pairs}}\theta_{ij}x_ix_j)$ for $x_i \in \{-1, 1\}$ on a 4 × 4 grid with 5 random vectors as training data. The sufficient statistics are $t(x) = \{x_ix_j \mid (i, j) \in \text{Pairs}\}$, and with 24 pairs, $\|t(x)\|_2 \le R_2 = \sqrt{24}$. For a fast-mixing set, constrain $|\theta_{ij}| \le 0.2$ for all pairs. Since the maximum degree is 4, $\tau(\epsilon) \le \left\lceil\frac{N\log(N/\epsilon)}{1 - 4\tanh(0.2)}\right\rceil$. Fix λ = 1, $\epsilon_\theta = 2$ and δ = 0.1. Though the theory above suggests the Lipschitz constant $L = 4R_2^2 + \lambda = 97$, a lower value of L = 10 is used, which converged faster in practice (with exact or approximate gradients). Now, one can derive that $\|\theta_0 - \theta^*\|_2 \le D = \sqrt{24 \times (2 \times 0.2)^2}$, $C = \log(16)$ and $\alpha = \exp(-(1 - 4\tanh 0.2)/16)$. Applying Corollary 9 with $\beta_1 = 0.01$, $\beta_2 = 0.9$ and $\beta_3 = 0.1$ gives K = 46, M = 1533 and v = 561. Fig. 2 shows the results. In practice, the algorithm finds a solution tighter than the specified $\epsilon_\theta$, indicating a degree of conservatism in the theoretical bound.
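The arithmetic behind the example's constants can be sketched directly from the Corollary 9 formulas. The rounding conventions for M and v below are assumptions on my reading of the corollary, so those two values are only sanity-checked rather than matched against the figures quoted above; the iteration count K does reproduce.

```python
import math

# Constants stated in the 4x4 Ising example.
R2 = math.sqrt(24)                  # ||t(x)||_2 bound, 24 pairs
lam, L = 1.0, 10.0                  # regularization and practical Lipschitz constant
eps_theta, delta = 2.0, 0.1
beta1, beta2, beta3 = 0.01, 0.9, 0.1
D = math.sqrt(24 * (2 * 0.2) ** 2)  # bound on ||theta_0 - theta*||_2
C = math.log(16)
alpha = math.exp(-(1 - 4 * math.tanh(0.2)) / 16)

# Corollary 9 settings (ceiling applied as an assumption).
K = math.ceil(L / lam * math.log(D / (beta1 * eps_theta)))
M = math.ceil(L ** 2 * R2 ** 2 / (eps_theta ** 2 * beta2 ** 2 * lam ** 2)
              * (1 + math.sqrt(2 * math.log(K / delta))) ** 2)
v = math.ceil(1 / (1 - alpha)
              * math.log(2 * L * R2 * C / (beta3 * eps_theta * lam)))
```

Note that α is extremely close to 1 here, which is why v ends up in the hundreds of Markov transitions per sample.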
7 Conclusions This section discusses some weaknesses of the above analysis, and possible directions for future work. Analyzing complexity in terms of the total sampling effort ignores the complexity of projection itself. Since projection only needs to be done K times, this time will often be very small in comparison to sampling time. (This is certainly true in the above example.) However, this might not be the case if the projection algorithm scales super-linearly in the size of the model. Another issue to consider is how the samples are initialized. As far as the proof of correctness goes, the initial distribution r is arbitrary. In the above example, a simple uniform distribution was used. However, one might use the empirical distribution of the training data, which is equivalent to contrastive divergence [5]. It is reasonable to think that this will tend to reduce the mixing time when $p_\theta$ is close to the model generating the data. However, the number of Markov chain transitions v prescribed above is larger than typically used with contrastive divergence, and Algorithm 1 does not reduce the step size over time. While it is common to regularize to encourage fast mixing with contrastive divergence [14, Section 10], this is typically done with simple heuristic penalties. Further, contrastive divergence is often used with hidden variables. Still, this provides a bound on how closely a variant of contrastive divergence could approximate the maximum likelihood solution. The above analysis does not encompass the common strategy for maximum likelihood learning where one maintains a “pool” of samples between iterations, and initializes one Markov chain at each iteration from each element of the pool. The idea is that if the samples at the previous iteration were close to $p_{k-1}$ and $p_{k-1}$ is close to $p_k$, then this provides an initialization close to the current solution.
However, the proof technique used here is based on the assumption that the samples $x^k_i$ at each iteration are independent, and so cannot be applied to this strategy. Acknowledgements Thanks to Ivona Bezáková, Aaron Defazio, Nishant Mehta, Aditya Menon, Cheng Soon Ong and Christfried Webers. NICTA is funded by the Australian Government through the Dept. of Communications and the Australian Research Council through the ICT Centre of Excellence Program. References [1] Abbeel, P., Koller, D., and Ng, A. Learning factor graphs in polynomial time and sample complexity. Journal of Machine Learning Research, 7:1743–1788, 2006. [2] Asuncion, A., Liu, Q., Ihler, A., and Smyth, P. Learning with blocks: composite likelihood and contrastive divergence. In AISTATS, 2010. [3] Besag, J. Statistical analysis of non-lattice data. Journal of the Royal Statistical Society. Series D (The Statistician), 24(3):179–195, 1975. [4] Boucheron, S., Lugosi, G., and Massart, P. Concentration Inequalities: A Nonasymptotic Theory of Independence. Oxford University Press, 2013. [5] Carreira-Perpiñán, M. A. and Hinton, G. On contrastive divergence learning. In AISTATS, 2005. [6] Chow, C. I. and Liu, C. N. Approximating discrete probability distributions with dependence trees. IEEE Transactions on Information Theory, 14:462–467, 1968. [7] Descombes, X., Morris, R., Zerubia, J., and Berthod, M. Estimation of Markov random field prior parameters using Markov chain Monte Carlo maximum likelihood. IEEE Transactions on Image Processing, 8(7):954–963, 1996. [8] Domke, J. and Liu, X. Projecting Ising model parameters for fast mixing. In NIPS, 2013. [9] Dyer, M. E., Goldberg, L. A., and Jerrum, M. Matrix norms and rapid mixing for spin systems. Ann. Appl. Probab., 19:71–107, 2009. [10] Geyer, C. Markov chain Monte Carlo maximum likelihood. In Symposium on the Interface, 1991. [11] Gu, M. G. and Zhu, H.-T. Maximum likelihood estimation for spatial models by Markov chain Monte Carlo stochastic approximation.
Journal of the Royal Statistical Society: Series B (Statistical Methodology), 63(2):339–355, 2001. [12] Hayes, T. A simple condition implying rapid mixing of single-site dynamics on spin systems. In FOCS, 2006. [13] Heinemann, U. and Globerson, A. Inferning with high girth graphical models. In ICML, 2014. [14] Hinton, G. A practical guide to training restricted Boltzmann machines. Technical report, University of Toronto, 2010. [15] Huber, M. Simulation reductions for the Ising model. Journal of Statistical Theory and Practice, 5(3):413–424, 2012. [16] Hyvärinen, A. Estimation of non-normalized statistical models by score matching. Journal of Machine Learning Research, 6:695–709, 2005. [17] Jerrum, M. and Sinclair, A. Polynomial-time approximation algorithms for the Ising model. SIAM Journal on Computing, 22:1087–1116, 1993. [18] Koller, D. and Friedman, N. Probabilistic Graphical Models: Principles and Techniques. MIT Press, 2009. [19] Levin, D. A., Peres, Y., and Wilmer, E. L. Markov Chains and Mixing Times. American Mathematical Society, 2006. [20] Lindsay, B. Composite likelihood methods. Contemporary Mathematics, 80(1):221–239, 1988. [21] Liu, X. and Domke, J. Projecting Markov random field parameters for fast mixing. In NIPS, 2014. [22] Marlin, B. and de Freitas, N. Asymptotic efficiency of deterministic estimators for discrete energy-based models: Ratio matching and pseudolikelihood. In UAI, 2011. [23] Mizrahi, Y., Denil, M., and de Freitas, N. Linear and parallel learning of Markov random fields. In ICML, 2014. [24] Papandreou, G. and Yuille, A. L. Perturb-and-MAP random fields: Using discrete optimization to learn and sample from energy models. In ICCV, 2011. [25] Salakhutdinov, R. Learning in Markov random fields using tempered transitions. In NIPS, 2009. [26] Schmidt, M., Roux, N. L., and Bach, F. Convergence rates of inexact proximal-gradient methods for convex optimization. In NIPS, 2011. [27] Schmidt, U., Gao, Q., and Roth, S.
A generative perspective on MRFs in low-level vision. In CVPR, 2010. [28] Steinhardt, J. and Liang, P. Learning fast-mixing models for structured prediction. In ICML, 2015. [29] Tieleman, T. Training restricted Boltzmann machines using approximations to the likelihood gradient. In ICML, 2008. [30] Varin, C., Reid, N., and Firth, D. An overview of composite likelihood methods. Statistica Sinica, 21: 5–24, 2011. [31] Wainwright, M. Estimating the "wrong" graphical model: Benefits in the computation-limited setting. Journal of Machine Learning Research, 7:1829–1859, 2006. [32] Wainwright, M. and Jordan, M. Graphical models, exponential families, and variational inference. Found. Trends Mach. Learn., 1(1-2):1–305, 2008. [33] Zhu, S. C., Wu, Y., and Mumford, D. Filters, random fields and maximum entropy (FRAME): Towards a unified theory for texture modeling. International Journal of Computer Vision, 27(2):107–126, 1998. 9 | 2015 | 41 |
Sampling from Probabilistic Submodular Models

Alkis Gotovos (ETH Zurich) alkisg@inf.ethz.ch
S. Hamed Hassani (ETH Zurich) hamed@inf.ethz.ch
Andreas Krause (ETH Zurich) krausea@ethz.ch

Abstract

Submodular and supermodular functions have found wide applicability in machine learning, capturing notions such as diversity and regularity, respectively. These notions have deep consequences for optimization, and the problem of (approximately) optimizing submodular functions has received much attention. However, beyond optimization, these notions allow specifying expressive probabilistic models that can be used to quantify predictive uncertainty via marginal inference. Prominent, well-studied special cases include Ising models and determinantal point processes, but the general class of log-submodular and log-supermodular models is much richer and little studied. In this paper, we investigate the use of Markov chain Monte Carlo sampling to perform approximate inference in general log-submodular and log-supermodular models. In particular, we consider a simple Gibbs sampling procedure, and establish two sufficient conditions, the first guaranteeing polynomial-time, and the second fast (O(n log n)) mixing. We also evaluate the efficiency of the Gibbs sampler on three examples of such models, and compare against a recently proposed variational approach.

1 Introduction

Modeling notions such as coverage, representativeness, or diversity is an important challenge in many machine learning problems. These notions are well captured by submodular set functions. Analogously, supermodular functions capture notions of smoothness, regularity, or cooperation. As a result, submodularity and supermodularity, akin to concavity and convexity, have found numerous applications in machine learning.
The majority of previous work has focused on optimizing such functions, including the development and analysis of algorithms for minimization [10] and maximization [9, 26], as well as the investigation of practical applications, such as sensor placement [21], active learning [12], influence maximization [19], and document summarization [25]. Beyond optimization, though, it is of interest to consider probabilistic models defined via submodular functions, that is, distributions over finite sets (or, equivalently, binary random vectors) defined as p(S) ∝ exp(βF(S)), where F : 2^V → R is a submodular or supermodular function (equivalently, either F or −F is submodular), and β ≥ 0 is a scaling parameter. Finding most likely sets in such models captures classical submodular optimization. However, going beyond point estimates, that is, performing general probabilistic (e.g., marginal) inference in them, allows us to quantify uncertainty given some observations, as well as learn such models from data. Only a few special cases belonging to this class of models have been extensively studied in the past; most notably, Ising models [20], which are log-supermodular in the usual case of attractive (ferromagnetic) potentials, or log-submodular under repulsive (anti-ferromagnetic) potentials, and determinantal point processes [23], which are log-submodular. Recently, Djolonga and Krause [6] considered a more general treatment of such models, and proposed a variational approach for performing approximate probabilistic inference for them. It is natural to ask to what degree the usual alternative to variational methods, namely Monte Carlo sampling, is applicable to these models, and how it performs in comparison. To this end, in this paper we consider a simple Markov chain Monte Carlo (MCMC) algorithm on log-submodular and log-supermodular models, and provide a first analysis of its performance.
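For intuition, the distribution p(S) ∝ exp(βF(S)) and its marginals can be computed exactly by brute force when the ground set is tiny; a hypothetical sketch (function names and the toy F are illustrative, not from the paper):

```python
import itertools
import math

def partition_function(F, n, beta=1.0):
    """Brute-force Z(β) = Σ_{S⊆V} exp(βF(S)) over all 2^n subsets of
    V = {0, ..., n-1}; only feasible for tiny n."""
    return sum(math.exp(beta * F(S))
               for k in range(n + 1)
               for S in itertools.combinations(range(n), k))

def exact_marginal(F, n, v, beta=1.0):
    """p(v ∈ S) computed exactly from the definition p(S) = exp(βF(S)) / Z."""
    Z = partition_function(F, n, beta)
    num = sum(math.exp(beta * F(S))
              for k in range(n + 1)
              for S in itertools.combinations(range(n), k)
              if v in S)
    return num / Z

# Toy modular F(S) = |S|: elements are independent, so the marginal of
# every element is e^β / (1 + e^β).
F = lambda S: len(S)
p = exact_marginal(F, n=4, v=0, beta=1.0)
```

The exponential cost of this enumeration (2^n subsets) is exactly what motivates the sampling approach studied in the paper.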
We present two theoretical conditions that respectively guarantee polynomial-time and fast (O(n log n)) mixing in such models, and experimentally compare against the variational approximations on three examples.

2 Problem Setup

We start by considering set functions F : 2^V → R, where V is a finite ground set of size |V| = n. Without loss of generality, if not otherwise stated, we will hereafter assume that V = [n] := {1, 2, ..., n}. The marginal gain obtained by adding element v ∈ V to set S ⊆ V is defined as F(v|S) := F(S ∪ {v}) − F(S). Intuitively, submodularity expresses a notion of diminishing returns; that is, adding an element to a larger set provides less benefit than adding it to a smaller one. More formally, F is submodular if, for any S ⊆ T ⊆ V, and any v ∈ V \ T, it holds that F(v|T) ≤ F(v|S). Supermodularity is defined analogously by reversing the sign of this inequality. In particular, if a function F is submodular, then the function −F is supermodular. If a function m is both submodular and supermodular, then it is called modular, and may be written in the form m(S) = c + Σ_{v∈S} m_v, where c ∈ R, and m_v ∈ R, for all v ∈ V. Our main focus in this paper is distributions over the powerset of V of the form

p(S) = exp(βF(S)) / Z,   (1)

for all S ⊆ V, where F is submodular or supermodular. The scaling parameter β is referred to as the inverse temperature, and distributions of the above form are called log-submodular or log-supermodular respectively. The constant denominator Z := Z(β) := Σ_{S⊆V} exp(βF(S)) serves the purpose of normalizing the distribution and is called the partition function of p. An alternative and equivalent way of defining distributions of the above form is via binary random vectors X ∈ {0, 1}^n. If we define V(X) := {v ∈ V | X_v = 1}, it is easy to see that the distribution p_X(X) ∝ exp(βF(V(X))) over binary vectors is isomorphic to the distribution over sets of (1).
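The diminishing-returns characterization above also gives a direct (exponential-time) submodularity check for tiny ground sets; a sketch, using a made-up coverage function as the toy example:

```python
import itertools

def is_submodular(F, V):
    """Check F(v|T) <= F(v|S) for all S ⊆ T ⊆ V and v ∉ T (diminishing
    returns). Enumerates all subset pairs, so only feasible for tiny V."""
    subsets = [set(c) for k in range(len(V) + 1)
               for c in itertools.combinations(sorted(V), k)]
    for S in subsets:
        for T in subsets:
            if not S <= T:
                continue
            for v in V:
                if v in T:
                    continue
                # gain at the larger set must not exceed gain at the smaller
                if F(T | {v}) - F(T) > F(S | {v}) - F(S) + 1e-12:
                    return False
    return True

# Toy coverage function: each element covers some sets; F(S) = total covered.
COVERS = {0: {"a", "b"}, 1: {"b"}, 2: {"c"}}
coverage = lambda S: len(set().union(*(COVERS[v] for v in S)) if S else set())
```

Coverage functions satisfy the check, while a convex function of cardinality such as |S|² violates it (its gains grow with the set size), illustrating the supermodular side of the definition.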
With a slight abuse of notation, we will use F(X) to denote F(V(X)), and use p to refer to both distributions.

Example models. The (ferromagnetic) Ising model is an example of a log-supermodular model. In its simplest form, it is defined through an undirected graph (V, E), and a set of pairwise potentials σ_{v,w}(S) := 4(1{v∈S} − 0.5)(1{w∈S} − 0.5). Its distribution has the form p(S) ∝ exp(β Σ_{{v,w}∈E} σ_{v,w}(S)), and is log-supermodular, because F(S) = Σ_{{v,w}∈E} σ_{v,w}(S) is supermodular. (Each σ_{v,w} is supermodular, and supermodular functions are closed under addition.) Determinantal point processes (DPPs) are examples of log-submodular models. A DPP is defined via a positive semidefinite matrix K ∈ R^{n×n}, and has a distribution of the form p(S) ∝ det(K_S), where K_S denotes the square submatrix indexed by S. Since F(S) = ln det(K_S) is a submodular function, p is log-submodular. Another example of log-submodular models are those defined through facility location functions, which have the form F(S) = Σ_{ℓ∈[L]} max_{v∈S} w_{v,ℓ}, where w_{v,ℓ} ≥ 0, and are submodular. If w_{v,ℓ} ∈ {0, 1}, then F represents a set cover function. Note that both the facility location model and the Ising model use decomposable functions, that is, functions that can be written as a sum of simpler submodular (resp. supermodular) functions F_ℓ:

F(S) = Σ_{ℓ∈[L]} F_ℓ(S).   (2)

Marginal inference. Our goal is to perform marginal inference for the distributions described above. Concretely, for some fixed A ⊆ B ⊆ V, we would like to compute the probability of sets S that contain all elements of A, but no elements outside of B, that is, p(A ⊆ S ⊆ B). More generally, we are interested in computing conditional probabilities of the form p(A ⊆ S ⊆ B | C ⊆ S ⊆ D). This computation can be reduced to computing unconditional marginals as follows. For any C ⊆ V, define the contraction of F on C, F_C : 2^{V\C} → R, by F_C(S) = F(S ∪ C) − F(C), for all S ⊆ V \ C. Also, for any D ⊆ V, define the restriction of F to D, F^D : 2^D → R, by F^D(S) = F(S), for all S ⊆ D.
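The example models above are easy to instantiate in code; a minimal sketch (the weights and edges are made-up toy data):

```python
def facility_location(weights):
    """F(S) = Σ_ℓ max_{v∈S} w[v][ℓ], with the max over the empty set taken
    as 0. Submodular for nonnegative weights; weights: dict v -> [w_{v,ℓ}]."""
    L = len(next(iter(weights.values())))
    return lambda S: sum(max((weights[v][l] for v in S), default=0.0)
                         for l in range(L))

def ising(edges):
    """F(S) = Σ_{{v,w}∈E} σ_{v,w}(S), with σ_{v,w}(S) =
    4(1{v∈S} − 0.5)(1{w∈S} − 0.5); supermodular (ferromagnetic case)."""
    return lambda S: sum(4 * ((v in S) - 0.5) * ((w in S) - 0.5)
                         for v, w in edges)

F_fl = facility_location({0: [1.0, 0.0], 1: [0.5, 2.0]})
F_is = ising([(0, 1), (1, 2)])
```

Note how each Ising potential rewards agreement: σ_{v,w} equals +1 when v and w are both in or both out of S, and −1 otherwise, which is exactly what makes the sum supermodular.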
If F is submodular, then its contractions and restrictions are also submodular, and, thus, (F_C)^D is submodular. Finally, it is easy to see that p(S | C ⊆ S ⊆ D) ∝ exp(β(F_C)^D(S)). In our experiments, we consider computing marginals of the form p(v ∈ S | C ⊆ S ⊆ D), for some v ∈ V, which correspond to A = {v}, and B = V.

Algorithm 1 Gibbs sampler
Input: Ground set V, distribution p(S) ∝ exp(βF(S))
1: X_0 ← random subset of V
2: for t = 0 to N_iter do
3:   v ← Unif(V)
4:   Δ_F(v|X_t) ← F(X_t ∪ {v}) − F(X_t \ {v})
5:   p_add ← exp(βΔ_F(v|X_t)) / (1 + exp(βΔ_F(v|X_t)))
6:   z ← Unif([0, 1])
7:   if z ≤ p_add then X_{t+1} ← X_t ∪ {v} else X_{t+1} ← X_t \ {v}
8: end for

3 Sampling and Mixing Times

Performing exact inference in models defined by (1) boils down to computing the partition function Z. Unfortunately, this is generally a #P-hard problem, which was shown to be the case even for Ising models by Jerrum and Sinclair [17]. However, they also proposed a sampling-based FPRAS for a class of ferromagnetic models, which gives us hope that it may be possible to efficiently perform approximate inference in more general models under suitable conditions. MCMC sampling [24] approaches are based on performing randomly selected local moves in a state space E to approximately compute quantities of interest. The visited states (X_0, X_1, ...) form a Markov chain, which under mild conditions converges to a stationary distribution π. Crucially, the probabilities of transitioning from one state to another are carefully chosen to ensure that the stationary distribution is identical to the distribution of interest. In our case, the state space is the powerset of V (equivalently, the space of all binary vectors of length n), and to approximate the marginal probabilities of p we construct a chain over subsets of V that has stationary distribution p.

The Gibbs sampler. In this paper, we focus on one of the simplest and most commonly used chains, namely the Gibbs sampler, also known as the Glauber chain.
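Algorithm 1 translates almost line for line into code; a hypothetical sketch (note that only F's values near the current state are needed, and Z never appears):

```python
import math
import random

def gibbs_chain(F, V, beta=1.0, n_iter=1000, seed=0):
    """Single-site Gibbs sampler (Algorithm 1) for p(S) ∝ exp(βF(S)).
    Returns the list of visited states X_0, ..., X_{n_iter}."""
    rng = random.Random(seed)
    V = list(V)
    X = {v for v in V if rng.random() < 0.5}      # X_0: random subset of V
    states = [frozenset(X)]
    for _ in range(n_iter):
        v = rng.choice(V)                         # v ~ Unif(V)
        delta = F(X | {v}) - F(X - {v})           # Δ_F(v | X_t)
        p_add = math.exp(beta * delta) / (1 + math.exp(beta * delta))
        X = X | {v} if rng.random() <= p_add else X - {v}
        states.append(frozenset(X))
    return states
```

A quick sanity check: for a modular F(S) = |S| the sites are independent and every marginal equals exp(β) / (1 + exp(β)), which the empirical frequencies of the chain should approach.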
We denote by P the transition matrix of the chain; each element P(x, y) corresponds to the conditional probability of transitioning from state x to state y, that is, P(x, y) := P[X_{t+1} = y | X_t = x], for any x, y ∈ E, and any t ≥ 0. We also define an adjacency relation x ∼ y on the elements of the state space, which denotes that x and y differ by exactly one element. It follows that each x ∈ E has exactly n neighbors. The Gibbs sampler is defined by an iterative two-step procedure, as shown in Algorithm 1. First, it selects an element v ∈ V uniformly at random; then, it adds or removes v to the current state X_t according to the conditional probability of the resulting state. Importantly, the conditional probabilities that need to be computed do not depend on the partition function Z, thus the chain can be simulated efficiently, even though Z is unknown and hard to compute. Moreover, it is easy to see that Δ_F(v|X_t) = 1{v∉X_t} F(v|X_t) + 1{v∈X_t} F(v|X_t \ {v}); thus, the sampler only requires a black box for the marginal gains of F, which are often faster to compute than the values of F itself. Finally, it is easy to show that the stationary distribution of the chain constructed this way is p.

Mixing times. Approximating quantities of interest using MCMC methods is based on using time averages to estimate expectations over the desired distribution. In particular, we estimate the expected value of a function f : E → R by E_p[f(X)] ≈ (1/T) Σ_{r=1}^{T} f(X_{s+r}). For example, to estimate the marginal p(v ∈ S), for some v ∈ V, we would define f(x) = 1{x_v = 1}, for all x ∈ E. The choice of burn-in time s and number of samples T in the above expression presents a tradeoff between computational efficiency and approximation accuracy. It turns out that the effect of both s and T is largely dependent on a fundamental quantity of the chain called the mixing time [24].
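The time-average estimator E_p[f(X)] ≈ (1/T) Σ_r f(X_{s+r}) is a one-liner over the visited states; a sketch, here fed with i.i.d. fair-coin subsets as a stand-in for Gibbs samples (an assumption for illustration: the true marginal of every element is then 0.5):

```python
import random

def time_average(states, f, burn_in):
    """Estimate E_p[f(X)] by averaging f over the states after burn-in s."""
    kept = states[burn_in:]
    return sum(f(x) for x in kept) / len(kept)

def indicator(v):
    """f(S) = 1{v ∈ S}; its expectation is the marginal p(v ∈ S)."""
    return lambda S: 1.0 if v in S else 0.0

# Stand-in "chain": i.i.d. uniform subsets of {0, 1, 2}.
rng = random.Random(1)
chain = [frozenset(v for v in range(3) if rng.random() < 0.5)
         for _ in range(20000)]
est = time_average(chain, indicator(0), burn_in=10000)
```

With real Gibbs samples the states are correlated, which is why the burn-in s and the sample count T matter in the way the text describes.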
The mixing time of a chain quantifies the number of iterations t required for the distribution of X_t to be close to the stationary distribution π. More formally, it is defined as t_mix(ϵ) := min{t | d(t) ≤ ϵ}, where d(t) denotes the worst-case (over the starting state X_0 of the chain) total variation distance between the distribution of X_t and π. Establishing upper bounds on the mixing time of our Gibbs sampler is, therefore, sufficient to guarantee efficient approximate marginal inference (e.g., see [24, Theorem 12.19]).

4 Theoretical Results

In the previous section we mentioned that exact computation of the partition function for the class of models we consider here is, in general, infeasible. Only for very few exceptions, such as DPPs, is exact inference possible in polynomial time [23]. Even worse, it has been shown that the partition function of general Ising models is hard to approximate; in particular, there is no FPRAS for these models, unless RP = NP [17]. This implies that the mixing time of any Markov chain with such a stationary distribution will, in general, be exponential in n. It is, therefore, our aim to derive sufficient conditions that guarantee sub-exponential mixing times for the general class of models. In some of our results we will use the fact that any submodular function F can be written as

F = c + m + f,   (3)

where c ∈ R is a constant that has no effect on distributions defined by (1); m is a normalized (m(∅) = 0) modular function; and f is a normalized (f(∅) = 0) monotone submodular function, that is, it additionally satisfies the monotonicity property f(v|S) ≥ 0, for all v ∈ V, and all S ⊆ V. A similar decomposition is possible for any supermodular function as well.

4.1 Polynomial-time mixing

Our guarantee for mixing times that are polynomial in n depends crucially on the following quantity, which is defined for any set function F : 2^V → R:

ζ_F := max_{A,B⊆V} |F(A) + F(B) − F(A ∪ B) − F(A ∩ B)|.
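For tiny ground sets, ζ_F is directly computable by enumeration, which can help build intuition for the definition; a hypothetical sketch:

```python
import itertools

def zeta(F, V):
    """ζ_F = max_{A,B⊆V} |F(A) + F(B) − F(A∪B) − F(A∩B)|: the worst-case
    violation of the modular equality (0 iff F is modular). O(4^n) time."""
    subsets = [frozenset(c) for k in range(len(V) + 1)
               for c in itertools.combinations(sorted(V), k)]
    return max(abs(F(A) + F(B) - F(A | B) - F(A & B))
               for A in subsets for B in subsets)
```

For example, the concave-of-cardinality function F(S) = min{1, |S|} (so L = 1 and φ_max = 1 in the notation used below) attains ζ_F = 1, matching the bound ζ_F ≤ Lφ_max discussed in the text.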
Intuitively, ζ_F quantifies a notion of distance to modularity. To see this, note that a function F is modular if and only if F(A) + F(B) = F(A ∪ B) + F(A ∩ B), for all A, B ⊆ V. For modular functions, therefore, we have ζ_F = 0. Furthermore, a function F is submodular if and only if F(A) + F(B) ≥ F(A ∪ B) + F(A ∩ B), for all A, B ⊆ V. Similarly, F is supermodular if the above holds with the sign reversed. It follows that for submodular and supermodular functions, ζ_F represents the worst-case amount by which F violates the modular equality. It is also important to note that, for submodular and supermodular functions, ζ_F depends only on the monotone part of F; if we decompose F according to (3), then it is easy to see that ζ_F = ζ_f. A trivial upper bound on ζ_F, therefore, is ζ_F ≤ f(V). Another quantity that has been used in the past to quantify the deviation of a submodular function from modularity is the curvature [4], defined as κ_F := 1 − min_{v∈V} (F(v|V \ {v}) / F(v)). Although of similar intuitive meaning, the multiplicative nature of its definition makes it significantly different from ζ_F, which is defined additively. As an example of a function class with ζ_F that does not depend on n, assume a ground set V = ∪_{ℓ=1}^{L} V_ℓ, and consider functions F(S) = Σ_{ℓ=1}^{L} φ(|S ∩ V_ℓ|), where φ : R → R is a bounded concave function, for example, φ(x) = min{φ_max, x}. Functions of this form are submodular, and have been used in applications such as document summarization to encourage diversity [25]. It is easy to see that, for such functions, ζ_F ≤ Lφ_max, that is, ζ_F is independent of n. The following theorem establishes a bound on the mixing time of the Gibbs sampler run on models of the form (1). The bound is exponential in ζ_F, but polynomial in n.

Theorem 1. For any function F : 2^V → R, the mixing time of the Gibbs sampler is bounded by

t_mix(ϵ) ≤ 2n² exp(2βζ_F) log(1 / (ϵ p_min)),

where p_min := min_{S∈E} p(S).
If F is submodular or supermodular, then the bound is improved to

t_mix(ϵ) ≤ 2n² exp(βζ_f) log(1 / (ϵ p_min)).

Note that, since the factor of two that constitutes the difference between the two statements of the theorem lies in the exponent, it can have a significant impact on the above bounds. The dependence on p_min is related to the (worst-case) starting state of the chain, and can be eliminated if we have a way to guarantee a high-probability starting state. If F is submodular or supermodular, this is usually straightforward to accomplish by using one of the standard constant-factor optimization algorithms [10, 26] as a preprocessing step. More generally, if F is bounded by 0 ≤ F(S) ≤ F_max, for all S ⊆ V, then log(1/p_min) = O(nβF_max).

Canonical paths. Our proof of Theorem 1 is based on the method of canonical paths [5, 15, 16, 28]. The high-level idea of this method is to view the state space as a graph, and try to construct a path between each pair of states that carries a certain amount of flow specified by the stationary distribution under consideration. Depending on the choice of these paths and the resulting load on the edges of the graph, we can derive bounds on the mixing time of the Markov chain. More concretely, let us assume that for some set function F and corresponding distribution p as in (1), we construct the Gibbs chain on state space E = 2^V with transition matrix P. We can view the state space as a directed graph that has vertex set E, and, for any S, S′ ∈ E, contains edge (S, S′) if and only if S ∼ S′, that is, if and only if S and S′ differ by exactly one element. Now, assume that, for any pair of states A, B ∈ E, we define what is called a canonical path γ_AB := (A = S_0, S_1, ..., S_ℓ = B), such that all (S_i, S_{i+1}) are edges in the above graph. We denote the length of path γ_AB by |γ_AB|, and define Q(S, S′) := p(S) P(S, S′). We also denote the set of all pairs of states whose canonical path goes through (S, S′) by C_{SS′} := {(A, B) ∈ E × E | (S, S′) ∈ γ_AB}.
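The canonical paths used in the proof are concrete combinatorial objects; a minimal sketch, under the assumption (for illustration, the text only fixes an order on V) that removals are performed before additions:

```python
def canonical_path(A, B):
    """Canonical path from set A to set B: remove the elements of A \\ B,
    then add the elements of B \\ A, each group in a fixed (sorted) order.
    Consecutive states differ by exactly one element."""
    path = [frozenset(A)]
    S = set(A)
    for v in sorted(set(A) - set(B)):
        S = S - {v}
        path.append(frozenset(S))
    for v in sorted(set(B) - set(A)):
        S = S | {v}
        path.append(frozenset(S))
    return path
```

Every step is an edge of the state-space graph, and the length |γ_AB| = |A \ B| + |B \ A| is at most n, which is the fact used later in the congestion bound.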
The following quantity, referred to as the congestion of an edge, uses a collection of canonical paths to quantify to what amount that edge is overloaded:

ρ(S, S′) := (1 / Q(S, S′)) Σ_{(A,B)∈C_{SS′}} p(A) p(B) |γ_AB|.   (4)

The denominator Q(S, S′) quantifies the capacity of edge (S, S′), while the sum represents the total flow through that edge according to the choice of canonical paths. The congestion of the whole graph is then defined as ρ := max_{S∼S′} ρ(S, S′). Low congestion implies that there are no bottlenecks in the state space, and the chain can move around fast, which also suggests rapid mixing. The following theorem makes this concrete.

Theorem 2 ([15, 28]). For any collection of canonical paths with congestion ρ, the mixing time of the chain is bounded by

t_mix(ϵ) ≤ ρ log(1 / (ϵ p_min)).

Proof outline of Theorem 1. To apply Theorem 2 to our class of distributions, we need to construct a set of canonical paths in the corresponding state space 2^V, and upper bound the resulting congestion. First, note that, to transition from state A ∈ E to state B ∈ E, in our case, it is enough to remove the elements of A \ B and add the elements of B \ A. Each removal and addition corresponds to an edge in the state space graph, and the order of these operations identifies a canonical path in this graph that connects A to B. For our analysis, we assume a fixed order on V (e.g., the natural order of the elements themselves), and perform the operations according to this order. Having defined the set of canonical paths, we proceed to bounding the congestion ρ(S, S′) for any edge (S, S′). The main difficulty in bounding ρ(S, S′) is due to the sum in (4) over all pairs in C_{SS′}. To simplify this sum we construct for each edge (S, S′) an injective map η_{SS′} : C_{SS′} → E; this is a combinatorial encoding technique that has been previously used in similar proofs to ours [15]. We then prove the following key lemma about these maps.

Lemma 1.
For any S ∼ S′, and any A, B ∈ E, it holds that

p(A) p(B) ≤ 2n exp(2βζ_F) Q(S, S′) p(η_{SS′}(A, B)).

Since η_{SS′} is injective, it follows that Σ_{(A,B)∈C_{SS′}} p(η_{SS′}(A, B)) ≤ 1. Furthermore, it is clear that each canonical path γ_AB has length |γ_AB| ≤ n, since we need to add and/or remove at most n elements to get from state A to state B. Combining these two facts with the above lemma, we get ρ(S, S′) ≤ 2n² exp(2βζ_F). If F is submodular or supermodular, we show that the dependence on ζ_F in Lemma 1 is improved to exp(βζ_F). More details can be found in the longer version of the paper.

4.2 Fast mixing

We now proceed to show that, under some stronger conditions, we are able to establish even faster, O(n log n), mixing. For any function F, we denote Δ_F(v|S) := F(S ∪ {v}) − F(S \ {v}), and define the following quantity,

γ_{F,β} := max_{S⊆V, r∈V} Σ_{v∈V} tanh( (β/2) |Δ_F(v|S) − Δ_F(v|S ∪ {r})| ),

which quantifies the (maximum) total influence of an element r ∈ V on the values of F. For example, if the inclusion of r makes no difference with respect to other elements of the ground set, we will have γ_{F,β} = 0. The following theorem establishes conditions for fast mixing of the Gibbs sampler when run on models of the form (1).

Theorem 3. For any set function F : 2^V → R, if γ_{F,β} < 1, then the mixing time of the Gibbs sampler is bounded by

t_mix(ϵ) ≤ (1 / (1 − γ_{F,β})) n (log n + log(1/ϵ)).

If F is additionally submodular or supermodular, and is decomposed according to (3), then

t_mix(ϵ) ≤ (1 / (1 − γ_{f,β})) n (log n + log(1/ϵ)).

Note that, in the second part of the theorem, γ_{f,β} depends only on the monotone part of F. We have seen in Section 2 that some commonly used models are based on decomposable functions that can be written in the form (2). We prove the following corollary that provides an easy to check condition for fast mixing of the Gibbs sampler when F is a decomposable submodular function.

Corollary 1.
For any submodular function F that can be written in the form of (2), with f being its monotone (also decomposable) part according to (3), if we define

θ_f := max_{v∈V} Σ_{ℓ∈[L]} √(f_ℓ(v))  and  λ_f := max_{ℓ∈[L]} Σ_{v∈V} √(f_ℓ(v)),

then it holds that γ_{f,β} ≤ (β/2) θ_f λ_f.

For example, applying this to the facility location model defined in Section 2, we get θ_f = max_v Σ_{ℓ=1}^{L} √(w_{v,ℓ}), and λ_f = max_ℓ Σ_{v∈V} √(w_{v,ℓ}), and obtain fast mixing if θ_f λ_f < 2/β. As a special case, if we consider the class of set cover functions (w_{v,ℓ} ∈ {0, 1}), such that each v ∈ V covers at most δ sets, and each set ℓ ∈ [L] is covered by at most δ elements, then θ_f, λ_f ≤ δ, and we obtain fast mixing if δ² < 2/β. Note that the corollary can be trivially applied to any submodular function by taking L = 1, but may, in general, result in a loose bound if used that way.

Coupling. Our proof of Theorem 3 is based on the coupling technique [1]; more specifically, we use the path coupling method [2, 15, 24]. Given a Markov chain (Z_t) on state space E with transition matrix P, a coupling for (Z_t) is a new Markov chain (X_t, Y_t) on state space E × E, such that both (X_t) and (Y_t) are by themselves Markov chains with transition matrix P. The idea is to construct the coupling in such a way that, even when the starting points X_0 and Y_0 are different, the chains (X_t) and (Y_t) tend to coalesce. Then, it can be shown that the coupling time t_couple := min{t ≥ 0 | X_t = Y_t} is closely related to the mixing time of the original chain (Z_t) [24]. The main difficulty in applying the coupling approach lies in the construction of the coupling itself, for which one needs to consider any possible pair of states (X_t, Y_t). The path coupling technique makes this construction easier by utilizing the same state-space graph that we used to define canonical paths in Section 4.1. The core idea is to first define a coupling only over adjacent states, and then extend it for any pair of states by using a metric on the graph.
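Stepping back to Corollary 1, the quantities θ_f and λ_f are cheap to evaluate for a facility location model; a hypothetical sketch of the resulting fast-mixing check (weights are made-up toy data):

```python
import math

def gamma_upper_bound(weights, beta):
    """Corollary 1 bound for facility location f(S) = Σ_ℓ max_{v∈S} w[v][ℓ]:
    θ_f = max_v Σ_ℓ √w[v][ℓ], λ_f = max_ℓ Σ_v √w[v][ℓ], and
    γ_{f,β} ≤ (β/2)·θ_f·λ_f; fast mixing is guaranteed when this is < 1."""
    V = list(weights)
    L = len(weights[V[0]])
    theta = max(sum(math.sqrt(weights[v][l]) for l in range(L)) for v in V)
    lam = max(sum(math.sqrt(weights[v][l]) for v in V) for l in range(L))
    return (beta / 2) * theta * lam

# Set-cover weights (w ∈ {0,1}) with δ = 1: each element covers one set and
# each set is covered by one element, so the bound is β·δ²/2 = β/2 < 1.
bound = gamma_upper_bound({0: [1, 0], 1: [0, 1]}, beta=1.0)
```

This mirrors the δ² < 2/β condition for set cover functions noted above.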
More concretely, let us denote by d : E × E → R the path metric on state space E; that is, for any x, y ∈ E, d(x, y) is the minimum length of any path from x to y in the state space graph. The following theorem establishes fast mixing using this metric, as well as the diameter of the state space, diam(E) := max_{x,y∈E} d(x, y).

Theorem 4 ([2, 24]). For any Markov chain (Z_t), if (X_t, Y_t) is a coupling, such that, for some α ≥ 0, and any x, y ∈ E with x ∼ y, it holds that

E[d(X_{t+1}, Y_{t+1}) | X_t = x, Y_t = y] ≤ e^{−α} d(x, y),

then the mixing time of the original chain is bounded by

t_mix(ϵ) ≤ (1/α) (log(diam(E)) + log(1/ϵ)).

Proof outline of Theorem 3. In our case, the path metric d is the Hamming distance between the binary vectors representing the states (equivalently, the number of elements by which two sets differ). We need to construct a suitable coupling (X_t, Y_t) for any pair of states x ∼ y. Consider the two corresponding sets S, R ⊆ V that differ by exactly one element, and assume that R = S ∪ {r}, for some r ∈ V. (The case S = R ∪ {s} for some s ∈ V is completely analogous.) Remember that the Gibbs sampler first chooses an element v ∈ V uniformly at random, and then adds or removes it according to the conditional probabilities. Our goal is to make the same updates happen to both S and R as frequently as possible. As a first step, we couple the candidate element for update v ∈ V to always be the same in both chains. Then, we have to distinguish between the following cases. If v = r, then the conditionals for both chains are identical, therefore we can couple both chains to add r with probability p_add := p(S ∪ {r}) / (p(S) + p(S ∪ {r})), which will result in new sets S′ = R′ = S ∪ {r}, or remove r with probability 1 − p_add, which will result in new sets S′ = R′ = S. Either way, we will have d(S′, R′) = 0. If v ≠ r, we cannot always couple the updates of the chains, because the conditional probabilities of the updates are different.
In fact, we are forced to have different updates (one chain adding v, the other chain removing v) with probability equal to the difference of the corresponding conditionals, which we denote here by p_dif(v). If this is the case, we will have d(S′, R′) = 2, otherwise the chains will make the same update and will still differ only by element r, that is, d(S′, R′) = 1. Putting together all the above, we get the following expected distance after one step:

E[d(S′, R′)] = 1 − 1/n + (1/n) Σ_{v≠r} p_dif(v) ≤ 1 − (1/n)(1 − γ_{F,β}) ≤ exp(−(1 − γ_{F,β})/n).

Our result follows from applying Theorem 4 with α = (1 − γ_{F,β})/n, noting that diam(E) = n.

5 Experiments

We compare the Gibbs sampler against the variational approach proposed by Djolonga and Krause [6] for performing inference in models of the form (1), and use the same three models as in their experiments. We briefly review the experimental setup here and refer to their paper for more details. The first is a (log-submodular) facility location model with an added modular term that penalizes the number of selected elements, that is, p(S) ∝ exp(F(S) − 2|S|), where F is a submodular facility location function. The model is constructed by randomly subsampling real data from a problem of sensor placement in a water distribution network [22]. In the experiments, we iteratively condition on random observations for each variable in the ground set. The second is a (log-supermodular) pairwise Markov random field (MRF; a generalized Ising model with varying weights), constructed by first randomly sampling points from a 2-D two-cluster Gaussian mixture model, and then introducing a pairwise potential for each pair of points with exponentially decreasing weight in the distance of the pair. In the experiments, we iteratively condition on pairs of observations, one from each cluster.
The third is a (log-supermodular) higher-order MRF, which is constructed by first generating a random Watts-Strogatz graph, and then creating one higher-order potential per node, which contains that node and all of its neighbors in the graph. The strength of the potentials is controlled by a parameter µ, which is closely related to the curvature of the functions that define them. In the experiments, we vary this parameter from 0 (modular model) to 1 ("strongly" supermodular model). For all three models, we constrain the size of the ground set to n = 20, so that we are able to compute, and compare against, the exact marginals. Furthermore, we run multiple repetitions for each model to account for the randomness of the model instance, and the random initialization of the Gibbs sampler.

[Figure 1: Absolute error of the marginals computed by the Gibbs sampler compared to variational inference [6]. Panels: (a) facility location (error vs. number of conditioned elements), (b) pairwise MRF (error vs. number of conditioned pairs), (c) higher-order MRF (error vs. µ); curves shown for Var (upper), Var (lower), and Gibbs with 100, 500, and 2000 iterations. A modest 500 Gibbs iterations outperform the variational method for the most part.]

The marginals we compute are of the form p(v ∈ S | C ⊆ S ⊆ D), for all v ∈ V. We run the Gibbs sampler for 100, 500, and 2000 iterations on each problem instance. In compliance with recommended MCMC practice [11], we discard the first half of the obtained samples as burn-in, and only use the second half for estimating the marginals. Figure 1 compares the average absolute error of the approximate marginals with respect to the exact ones. The averaging is performed over v ∈ V, and over the different repetitions of each experiment; error bars depict two standard errors.
The two variational approximations are obtained from factorized distributions associated with modular lower and upper bounds respectively [6]. We notice a similar trend on all three models. For the regimes that correspond to less "peaked" posterior distributions (small number of conditioned variables, small µ), even 100 Gibbs iterations outperform both variational approximations. The latter gain an advantage when the posterior is concentrated around only a few states, which happens after having conditioned on almost all variables in the first two models, or for µ close to 1 in the third model.

6 Further Related Work

In contemporary work to ours, Rebeschini and Karbasi [27] analyzed the mixing times of log-submodular models. Using a method based on matrix norms, which was previously introduced by Dyer et al. [7], and is closely related to path coupling, they arrive at a similar, though not directly comparable, condition to the one we presented in Theorem 3. Iyer and Bilmes [13] recently considered a different class of probabilistic models, called submodular point processes (SPPs), which are also defined through submodular functions, and have the form p(S) ∝ F(S). They showed that inference in SPPs is, in general, also a hard problem, and provided approximations and closed-form solutions for some subclasses. The canonical path method for bounding mixing times has been previously used in applications such as approximating the partition function of ferromagnetic Ising models [17], approximating matrix permanents [16, 18], and counting matchings in graphs [15]. The most prominent application of coupling-based methods is counting k-colorings in low-degree graphs [3, 14, 15]. Other applications include counting independent sets in graphs [8], and approximating the partition function of various subclasses of Ising models at high temperatures [24].
7 Conclusion

We considered the problem of performing marginal inference using MCMC sampling techniques in probabilistic models defined through submodular functions. In particular, we presented for the first time sufficient conditions to obtain upper bounds on the mixing time of the Gibbs sampler in general log-submodular and log-supermodular models. Furthermore, we demonstrated that, in practice, the Gibbs sampler compares favorably to previously proposed variational approximations, at least in regimes of high uncertainty. We believe that this is an important step towards a unified framework for further analysis and practical application of this rich class of probabilistic submodular models.

Acknowledgments

This work was partially supported by ERC Starting Grant 307036.

References

[1] David Aldous. Random walks on finite groups and rapidly mixing Markov chains. In Séminaire de Probabilités XVII. Springer, 1983.
[2] Russ Bubley and Martin Dyer. Path coupling: A technique for proving rapid mixing in Markov chains. In Symposium on Foundations of Computer Science, 1997.
[3] Russ Bubley, Martin Dyer, and Catherine Greenhill. Beating the 2∆ bound for approximately counting colourings: A computer-assisted proof of rapid mixing. In Symposium on Discrete Algorithms, 1998.
[4] Michele Conforti and Gérard Cornuéjols. Submodular set functions, matroids and the greedy algorithm: Tight worst-case bounds and some generalizations of the Rado-Edmonds theorem. Discrete Applied Mathematics, 1984.
[5] Persi Diaconis and Daniel Stroock. Geometric bounds for eigenvalues of Markov chains. The Annals of Applied Probability, 1991.
[6] Josip Djolonga and Andreas Krause. From MAP to marginals: Variational inference in Bayesian submodular models. In Neural Information Processing Systems, 2014.
[7] Martin Dyer, Leslie Ann Goldberg, and Mark Jerrum. Matrix norms and rapid mixing for spin systems. Annals of Applied Probability, 2009.
[8] Martin Dyer and Catherine Greenhill.
On markov chains for independent sets. J. of Algorithms, 2000. [9] Uriel Feige, Vahab S. Mirrokni, and Jan Vondrak. Maximizing non-monotone submodular functions. In Symposium on Foundations of Computer Science, 2007. [10] Satoru Fujishige. Submodular Functions and Optimization. Elsevier Science, 2005. [11] Andrew Gelman and Kenneth Shirley. Innovation and intellectual property rights. In Handbook of Markov Chain Monte Carlo. CRC Press, 2011. [12] Daniel Golovin and Andreas Krause. Adaptive submodularity: Theory and applications in active learning and stochastic optimization. Journal of Artificial Intelligence Research, 2011. [13] Rishabh Iyer and Jeff Bilmes. Submodular point processes with applications in machine learning. In International Conference on Artificial Intelligence and Statistics, 2015. [14] Mark Jerrum. A very simple algorithm for estimating the number of k-colorings of a low-degree graph. Random Structures and Algorithms, 1995. [15] Mark Jerrum. Counting, Sampling and Integrating: Algorithms and Complexity. Birkh¨auser, 2003. [16] Mark Jerrum and Alistair Sinclair. Approximating the permanent. SIAM Journal on Computing, 1989. [17] Mark Jerrum and Alistair Sinclair. Polynomial-time approximation algorithms for the Ising model. SIAM Journal on Computing, 1993. [18] Mark Jerrum, Alistair Sinclair, and Eric Vigoda. A polynomial-time approximation algorithm for the permanent of a matrix with non-negative entries. Journal of the ACM, 2004. [19] David Kempe, Jon Kleinberg, and Eva Tardos. Maximizing the spread of influence through a social network. In Conference on Knowledge Discovery and Data Mining, 2003. [20] Daphne Koller and Nir Friedman. Probabilistic Graphical Models: Principles and Techniques. The MIT Press, 2009. [21] Andreas Krause, Carlos Guestrin, Anupam Gupta, and Jon Kleinberg. Near-optimal sensor placements: Maximizing information while minimizing communication cost. In Information Processing in Sensor Networks, 2006. 
[22] Andreas Krause, Jure Leskovec, Carlos Guestrin, Jeanne Vanbriesen, and Christos Faloutsos. Efficient sensor placement optimization for securing large water distribution networks. Journal of Water Resources Planning and Management, 2008. [23] Alex Kulesza and Ben Taskar. Determinantal point processes for machine learning. Foundations and Trends in Machine Learning, 2012. [24] David A. Levin, Yuval Peres, and Elizabeth L. Wilmer. Markov Chains and Mixing Times. American Mathematical Society, 2008. [25] Hui Lin and Jeff Bilmes. A class of submodular functions for document summarization. In Human Language Technologies, 2011. [26] George L. Nemhauser, Laurence A. Wolsey, and Marshall L. Fisher. An analysis of approximations for maximizing submodular set functions. Mathematical Programming, 1978. [27] Patrick Rebeschini and Amin Karbasi. Fast mixing for discrete point processes. In Conference on Learning Theory, 2015. [28] Alistair Sinclair. Improved bounds for mixing rates of markov chains and multicommodity flow. Combinatorics, Probability and Computing, 1992. 9 | 2015 | 42 |
A class of network models recoverable by spectral clustering

Yali Wan
Department of Statistics, University of Washington, Seattle, WA 98195-4322, USA
yaliwan@washington.edu

Marina Meilă
Department of Statistics, University of Washington, Seattle, WA 98195-4322, USA
mmp@stat.washington.edu

Abstract

Finding communities in networks is a problem that remains difficult, in spite of the amount of attention it has recently received. The Stochastic Block-Model (SBM) is a generative model for graphs with “communities” for which, because of its simplicity, the theoretical understanding has advanced fast in recent years. In particular, there have been various results showing that simple versions of spectral clustering using the normalized Laplacian of the graph can recover the communities almost perfectly with high probability. Here we show that essentially the same algorithm used for the SBM and for its extension, the Degree-Corrected SBM, works on a wider class of Block-Models, which we call Preference Frame Models, with essentially the same guarantees. Moreover, the parametrization we introduce clearly exhibits the free parameters needed to specify this class of models, and results in bounds that expose with more clarity the parameters that control the recovery error in this model class.

1 Introduction

There have been many recent advances in the recovery of communities in networks under “block-model” assumptions [19, 18, 9]; in particular, advances in recovering communities by spectral clustering algorithms, which have been extended to models including node-specific propensities. In this paper, we argue that one can further expand the model class for which recovery by spectral clustering is possible, and describe a model that subsumes a number of existing models, which we call the Preference Frame Model (PFM). We show that under the PFM, the communities can be recovered with small error, w.h.p. Our results correspond to what [6] termed the “weak recovery” regime, in which w.h.p.
the fraction of nodes that are mislabeled is o(1) when n → ∞.

2 The Preference Frame Model of graphs with communities

This model embodies the assumption that interactions at the community level (which we will also call the macro level) can be quantified by meaningful parameters. This general assumption also underlies the (p, q) and related parameterizations of the SBM. We define a preference frame to be a graph with K nodes, one for each community, that encodes the connectivity pattern at the community level by a (non-symmetric) stochastic matrix R. Formally, given [K] = {1, . . . , K} and a K × K matrix R with det(R) ≠ 0 representing the transition matrix of a reversible Markov chain on [K], the weighted graph H = ([K], R) with edge set supp R (edges correspond to nonzero entries of R) is called a K-preference frame. Requiring reversibility is equivalent to requiring that there is a set of symmetric weights on the edges from which R can be derived [17]. We note that without the reversibility assumption we would be modeling directed graphs, which we leave for future work. We denote by ρ the left principal eigenvector of R, satisfying ρ^T R = ρ^T. W.l.o.g. we can assume that the eigenvalue 1 of R has multiplicity 1 (footnote 1), and therefore we call ρ the stationary distribution of R. We say that a deterministic weighted graph G = (V, S) with weight matrix S (and edge set supp S) admits a K-preference frame H = ([K], R) if and only if there exists a partition C of the nodes V into K clusters C = {C_1, . . . , C_K} of sizes n_1, . . . , n_K, respectively, so that the Markov chain on V with transition matrix P determined by S satisfies the linear constraints

Σ_{j∈C_m} P_{ij} = R_{lm}  for all i ∈ C_l and all cluster indices l, m ∈ {1, 2, . . . , K}.  (1)

The matrix P is obtained from S by the standard row normalization P = D^{−1}S, where D = diag{d_{1:n}} and d_i = Σ_{j=1}^n S_{ij}.
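The defining constraint (1) can be checked numerically for a given weighted graph. Below is a minimal sketch in NumPy; the 4-node graph, the cluster assignment, and the matrix R are hypothetical toy inputs, not taken from the paper.

```python
import numpy as np

def admits_frame(S, clusters, R, tol=1e-8):
    """Check constraint (1): for every node i in cluster C_l, the row sums of
    P = D^{-1} S over each cluster C_m must equal R[l, m]."""
    d = S.sum(axis=1)                              # node degrees d_i = sum_j S_ij
    P = S / d[:, None]                             # row-normalized transition matrix
    for l, Cl in enumerate(clusters):
        for m, Cm in enumerate(clusters):
            block = P[np.ix_(Cl, Cm)].sum(axis=1)  # sum_{j in C_m} P_ij, per node i
            if not np.allclose(block, R[l, m], atol=tol):
                return False
    return True

# Hypothetical toy graph: two clusters of two nodes each.
S = np.array([[0., 3., 1., 1.],
              [3., 0., 1., 1.],
              [1., 1., 0., 3.],
              [1., 1., 3., 0.]])
clusters = [[0, 1], [2, 3]]
R = np.array([[0.6, 0.4],
              [0.4, 0.6]])
ok = admits_frame(S, clusters, R)
```

Here every node has degree 5 and puts weight 3 inside its cluster, so each row of P sums to 0.6 over its own cluster and 0.4 over the other, matching R.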
A random graph family over node set V admits a K-preference frame H, and is called a Preference Frame Model (PFM), if the edges (i, j), i < j, are sampled independently from Bernoulli distributions with parameters S_{ij}. It is assumed that the edges obtained are undirected and that S_{ij} ≤ 1 for all pairs i ≠ j. We denote a realization from this process by A. Furthermore, let d̂_i = Σ_{j∈V} A_{ij}, and in general, throughout this paper, we will denote computable quantities derived from the observed A by the same letter as their model counterparts, decorated with the “hat” symbol. Thus, D̂ = diag d̂_{1:n}, P̂ = D̂^{−1}A, and so on. One question we will study is under what conditions the PFM can be estimated from a given A by a standard spectral clustering algorithm. Evidently, the difficult part in this estimation problem is recovering the partition C. If this is obtained correctly, the remaining parameters are easily estimated in a Maximum Likelihood framework. Another question we elucidate refers to the parametrization itself. It is known that in the SBM and the Degree-Corrected SBM (DC-SBM) [18], in spite of their simplicity, there are dependencies between the community-level “intensive” parameters and the graph-level “extensive” parameters, as we will show below. In the parametrization of the PFM, we can explicitly show which are the free parameters and which are the dependent ones. Several network models in wide use admit a preference frame; one example is the SBM(B) model, which we briefly describe here. Its parameters are the cluster sizes n_{1:K} and the connectivity matrix B ∈ [0, 1]^{K×K}. For two nodes i, j ∈ V, the probability of an edge (i, j) is B_{kl} iff i ∈ C_k and j ∈ C_l. The matrix B need not be symmetric. When B_{kk} = p and B_{kl} = q for k, l ∈ [K], k ≠ l, the model is denoted SBM(p, q). It is easy to verify that the SBM admits a preference frame.
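The sampling step described above (independent Bernoulli edges on i < j, then symmetrization) can be sketched as follows; the uniform matrix S is an arbitrary illustrative choice.

```python
import numpy as np

def sample_pfm_graph(S, seed=None):
    """Sample an undirected adjacency matrix A: each edge (i, j) with i < j
    is drawn independently as Bernoulli(S_ij); requires S_ij <= 1."""
    rng = np.random.default_rng(seed)
    n = S.shape[0]
    upper = np.triu(rng.random((n, n)) < S, k=1)   # sample only pairs i < j
    A = (upper | upper.T).astype(int)              # symmetrize; diagonal stays 0
    return A

S = np.full((6, 6), 0.5)                           # hypothetical edge probabilities
A = sample_pfm_graph(S, seed=0)
```

Only the upper triangle is sampled, so each undirected edge is drawn exactly once, and the resulting A is a symmetric 0/1 matrix with an empty diagonal.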
For instance, in the case of SBM(p, q), we have d_i = p(n_l − 1) + q(n − n_l) ≡ d_{C_l} for i ∈ C_l, and

R_{l,m} = q n_m / d_{C_l} if l ≠ m,  R_{l,l} = p(n_l − 1) / d_{C_l},  for l, m ∈ {1, 2, . . . , K}.

In the above, d_{C_l} denotes the common degree shared by all nodes in C_l. One particular realization of the PFM is the Homogeneous K-Preference Frame model (HPFM). In an HPFM, each node i ∈ V is characterized by a weight, or propensity to form ties, w_i. For each pair of communities l, m with l ≤ m, and for each i ∈ C_l, j ∈ C_m, we sample A_{ij} with probability S_{ij} given by

S_{ij} = R_{ml} w_i w_j / ρ_l.  (2)

This formulation ensures detailed balance in the edge expectations, i.e. S_{ij} = S_{ji}. The HPFM is virtually equivalent to what is known as the “degree model” [8] or “DC-SBM”, up to a reparameterization (footnote 2). Proposition 1 relates the node weights to the expected node degrees d_i. We note that the main result we prove in this paper uses independent sampling of edges only to prove the concentration of the Laplacian matrix. The PFM model can easily be extended to other graph models with dependent edges if one could prove concentration and eigenvalue separation. For example, when R has rational entries, the subgraph induced by each block of A can be represented by a random d-regular graph with a specified degree.

Footnote 1: Otherwise the networks obtained would be disconnected.
Footnote 2: Here we follow the customary definition of this model, which does not enforce S_{ii} = 0, even though this implies a non-zero probability of self-loops.

Proposition 1 In an HPFM, d_i = w_i Σ_{l=1}^K R_{kl} w_{C_l} / ρ_l whenever i ∈ C_k and k ∈ [K], where w_{C_l} = Σ_{j∈C_l} w_j.

Equivalent statements that the expected degrees in each cluster are proportional to the weights exist in [7, 19], and they are instrumental in analyzing this model. This particular parametrization immediately implies in which case the degrees are globally proportional to the weights. This is, obviously, the situation when w_{C_l} ∝ ρ_l for all l ∈ [K].
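The HPFM construction (2) and Proposition 1 can be verified numerically. The sketch below assumes a small reversible 2-community frame built from symmetric weights W, and hypothetical node weights w and cluster labels; none of these values come from the paper.

```python
import numpy as np

# A reversible 2-state frame from symmetric edge weights W (toy example).
W = np.array([[4., 1.],
              [1., 2.]])
R = W / W.sum(axis=1, keepdims=True)       # R_lm = W_lm / sum_m W_lm (stochastic)
rho = W.sum(axis=1) / W.sum()              # stationary distribution of R

labels = np.array([0, 0, 0, 1, 1])         # hypothetical cluster memberships
w = np.array([0.3, 0.4, 0.5, 0.35, 0.45])  # hypothetical node propensities

# Edge expectations S_ij = R_ml * w_i * w_j / rho_l for i in C_l, j in C_m (eq. 2)
n = len(w)
S = np.empty((n, n))
for i in range(n):
    for j in range(n):
        l, m = labels[i], labels[j]
        S[i, j] = R[m, l] * w[i] * w[j] / rho[l]

# Detailed balance in expectation: reversibility of R makes S symmetric.
sym = np.allclose(S, S.T)

# Proposition 1: d_i = w_i * sum_l R_kl * w_Cl / rho_l for i in C_k.
wC = np.array([w[labels == k].sum() for k in range(2)])
d_pred = np.array([w[i] * (R[labels[i]] * wC / rho).sum() for i in range(n)])
prop1 = np.allclose(S.sum(axis=1), d_pred)
```

Because ρ_l R_{lm} = ρ_m R_{ml} for a reversible chain, the two index orders in (2) agree, which is exactly what the `sym` check confirms; `prop1` checks the degree formula of Proposition 1 against the row sums of S.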
As we see, the node degrees in an HPFM are not directly determined by the propensities w_i, but depend on them through a multiplicative constant that varies with the cluster. This type of interaction between parameters has been observed in practically all extensions of the Stochastic Block-Model that we are aware of, making parameter interpretation more difficult. Our next result establishes what the free parameters of the PFM and of its subclasses are. As it will turn out, these parameters and their interactions are easily interpretable.

Proposition 2 Let (n_1, . . . , n_K) be a partition of n (assumed to represent the cluster sizes of C = {C_1, . . . , C_K}, a partition of node set V), R a non-singular K × K stochastic matrix, ρ its left principal eigenvector, and π_{C_1} ∈ [0, 1]^{n_1}, . . . , π_{C_K} ∈ [0, 1]^{n_K} probability distributions over C_{1:K}. Then there exists a PFM consistent with H = ([K], R), with clustering C, and whose node degrees are given by

d_i = d_tot ρ_k π_{C_k,i}  (3)

whenever i ∈ C_k, where d_tot = Σ_{i∈V} d_i is a user parameter which is only restricted above by Assumption 2.

The proof of this result is constructive, and can be found in the extended version. The parametrization shows to what extent one can independently specify the degree distribution of a network model and the connectivity parameters R. Moreover, it describes the pattern of connection of a node i as a composition of a macro-level pattern, which gives the total probability of i forming connections with a cluster l, and the micro-level distribution of connections between i and the members of C_l. These parameters are meaningful on their own and can be specified or estimated separately, as they have no hidden dependence on each other or on n, K. The PFM enjoys a number of other interesting properties. As this paper will show, almost all the properties that make SBMs popular and easy to understand hold also for the much more flexible PFM. In the remainder of this paper we derive recovery guarantees for the PFM.
As an additional goal, we will show that in the frame we set with the PFM, the recovery conditions become clearer, more interpretable, and occasionally less restrictive than for other models. As already mentioned, the PFM includes many models that have been found useful by previous authors. Yet, the PFM class is much more flexible than those individual models, in the sense that it allows other unexplored degrees of freedom (or, in other words, achieves the same advantages as previously studied models with fewer constraints on the data). Note that there is an infinite number of possible random graphs G with the same parameters (d_{1:n}, n_{1:K}, R) satisfying the constraints (1) and Proposition 2, yet for reliable community detection we do not need to control S fully, but only aggregate statistics like Σ_{j∈C} A_{ij}.

3 Spectral clustering algorithm and main result

Now we address the community recovery problem from a random graph (V, A) sampled from the PFM defined as above. We make the standard assumption that K is known. Our analysis is based on a very common spectral clustering algorithm used in [13] and described also in [14, 21].

Input: Graph (V, A) with |V| = n and A ∈ {0, 1}^{n×n}; number of clusters K
Output: Clustering Ĉ
1. Compute D̂ = diag(d̂_1, · · · , d̂_n) and the Laplacian L̂ = D̂^{−1/2} A D̂^{−1/2}.  (4)
2. Calculate the K eigenvectors Ŷ_1, · · · , Ŷ_K associated with the K eigenvalues |λ̂_1| ≥ · · · ≥ |λ̂_K| of L̂, normalized to unit length. We refer to them as the first K eigenvectors in the following text.
3. Set V̂_i = D̂^{−1/2} Ŷ_i, i = 1, · · · , K. Form the matrix V̂ = [V̂_1 · · · V̂_K].
4. Treating each row of V̂ as a point in K dimensions, cluster the rows by the K-means algorithm to obtain the clustering Ĉ.
Algorithm 1: Spectral Clustering

Note that the vectors V̂ are the first K eigenvectors of P̂. The K-means algorithm is assumed to find the global optimum. For more details on good initializations for K-means in step 4, see [16].
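Algorithm 1 can be sketched directly in NumPy. The toy two-clique graph, the deterministic farthest-point seeding, and the plain Lloyd iterations below are illustrative choices of ours; the paper instead assumes K-means finds the global optimum.

```python
import numpy as np

def spectral_clustering(A, K, n_iter=50):
    """Sketch of Algorithm 1: L = D^{-1/2} A D^{-1/2}; take the K eigenvectors
    with largest-magnitude eigenvalues; map back by D^{-1/2}; K-means on rows."""
    d = A.sum(axis=1).astype(float)
    dinv = 1.0 / np.sqrt(d)
    L = dinv[:, None] * A * dinv[None, :]          # normalized Laplacian (eq. 4)
    vals, vecs = np.linalg.eigh(L)                 # L is symmetric
    top = np.argsort(-np.abs(vals))[:K]            # K largest |eigenvalues|
    V = dinv[:, None] * vecs[:, top]               # rows = embedded nodes
    # Deterministic farthest-point seeding, then Lloyd iterations (illustrative).
    centers = V[[0]]
    for _ in range(1, K):
        d2 = ((V[:, None, :] - centers[None, :, :]) ** 2).sum(-1).min(axis=1)
        centers = np.vstack([centers, V[d2.argmax()]])
    for _ in range(n_iter):
        labels = ((V[:, None, :] - centers[None, :, :]) ** 2).sum(-1).argmin(1)
        for k in range(K):
            if (labels == k).any():
                centers[k] = V[labels == k].mean(axis=0)
    return labels

# Hypothetical toy graph: two 4-cliques joined by a single bridge edge.
A = np.zeros((8, 8), dtype=int)
A[:4, :4] = 1
A[4:, 4:] = 1
np.fill_diagonal(A, 0)
A[0, 4] = A[4, 0] = 1
labels = spectral_clustering(A, K=2)
```

On this toy graph the rows of V form two well-separated groups, so the clustering recovers the two cliques exactly.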
We quantify the difference between Ĉ and the true clustering C by the mis-clustering rate p_err, defined as

p_err = 1 − (1/n) max_{φ:[K]→[K]} Σ_k |C_{φ(k)} ∩ Ĉ_k|.  (5)

Theorem 3 (Mis-clustering rate bound for HPFM and PFM) Let the n × n matrix S admit a PFM; let w_{1:n}, R, ρ, P, A, d_{1:n} have their usual meaning, and let λ_{1:n} be the eigenvalues of P, with |λ_i| ≥ |λ_{i+1}|. Let d_min = min d_{1:n} be the minimum expected degree, d̂_min = min d̂_i, and d_max = max_{ij} n S_{ij}. Let γ ≥ 1, η > 0 be arbitrary numbers. Assume:
Assumption 1: S admits an HPFM model and (2) holds.
Assumption 2: S_{ij} ≤ 1.
Assumption 3: d̂_min ≥ log(n).
Assumption 4: d_min ≥ log(n).
Assumption 5: ∃κ > 0 such that d_max ≤ κ log n.
Assumption 6: grow > 0, where grow is defined in Proposition 4.
Assumption 7: λ_{1:K} are the eigenvalues of R, and |λ_K| − |λ_{K+1}| = σ > 0.
We also assume that we run Algorithm 1 on S and that K-means finds the optimal solution. Then, for n sufficiently large, the following statements hold with probability at least 1 − e^{−γ}.

PFM: Assumptions 2–7 imply
p_err ≤ (K d_tot / (n d_min grow)) · (C_0 γ^4 / (σ^2 log n)) + 4 (log n)^η / d̂_min.  (6)

HPFM: Assumptions 1–6 imply
p_err ≤ (K d_tot / (n d_min grow)) · (C_0 γ^4 / (λ_K^2 log n)) + 4 (log n)^η / d̂_min,  (7)

where C_0 is a constant depending on κ and γ.

Note that p_err decreases at least as 1/log(n) when d̂_min = d_min = log(n). This is because d̂_min and d_min help with the concentration of L. Using Proposition 4, the distances between rows of V, i.e. the true centers of the K-means step, are lower bounded by grow/d_tot. After plugging in the assumptions for d_min, d̂_min, d_max, we obtain

p_err ≤ (Kκ / grow) · (C_0 γ^4 / (σ^2 log n)) + 4 / (log n)^{1−η}.  (8)

When n is small, the first component on the right-hand side dominates because of the constant C_0, while the second part dominates when n is very large. This shows that p_err decreases almost as 1/log n. Of the remaining quantities, κ controls the spread of the degrees d_i.
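The mis-clustering rate (5) maximizes the cluster overlap over all relabelings φ; for small K this can be computed by brute force over permutations. A minimal sketch, with made-up label vectors:

```python
import numpy as np
from itertools import permutations

def misclustering_rate(true_labels, pred_labels, K):
    """Equation (5): p_err = 1 - (1/n) max_phi sum_k |C_phi(k) ∩ Ĉ_k|,
    maximizing the overlap over all K! label permutations phi."""
    n = len(true_labels)
    best = 0
    for phi in permutations(range(K)):
        overlap = sum(int(((true_labels == phi[k]) & (pred_labels == k)).sum())
                      for k in range(K))
        best = max(best, overlap)
    return 1 - best / n

true = np.array([0, 0, 0, 1, 1, 1])
pred = np.array([1, 1, 1, 0, 0, 1])   # labels swapped, plus one genuine error
err = misclustering_rate(true, pred, K=2)
```

The permutation maximization makes the rate invariant to label swapping: here the best relabeling matches 5 of 6 nodes, so the rate is 1/6.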
Notice that λ_K and σ are eigengaps in the HPFM and PFM models respectively; they depend only on the preference frame, and likewise for grow. The eigengaps ensure the stability of the principal subspaces and the separation from the spurious eigenvalues, as shown in Theorem 6. The term containing (log n)^η is designed to control the difference between d_i and d̂_i, with η a small positive constant.

3.1 Proof outline, techniques and main concepts

The proof of Theorem 3 (given in the extended version of the paper) relies on three steps, which are to be found in most results dealing with spectral clustering. First, concentration bounds of the empirical Laplacian L̂ w.r.t. L are obtained. There are various conditions under which these can be obtained, and ours are most similar to the recent result of [9]. The other tools we use are Hoeffding bounds and tools from linear algebra. Second, one needs to bound the perturbation of the eigenvectors Y as a function of the perturbation in L. This is based on the pivotal results of Davis and Kahan, see e.g. [18]. A crucial ingredient in these types of theorems is the size of the eigengap between the invariant subspace Y and its orthogonal complement. This condition is model-dependent, and therefore we discuss the techniques we introduce for solving this problem in the PFM in the next subsection. The third step is to bound the error of the K-means clustering algorithm. This is done by a counting argument. The crux of this step is to ensure the separation of the K distinct rows of V. This, again, is model-dependent, and we present our result below. The details and proofs are in the extended version. All proofs are for the PFM; to specialize to the HPFM, one replaces σ with |λ_K|.

3.2 Cluster separation and bounding the spurious eigenvalues in the PFM

Proposition 4 (Cluster separation) Let V, ρ, d_{1:n} have their usual meaning, define the cluster volume d_{C_k} = Σ_{i∈C_k} d_i, and let c_max, c_min be, respectively, max_k and min_k of d_{C_k} / (n ρ_k).
Let i, j ∈ V be nodes belonging respectively to clusters k, m with k ≠ m. Then

||V_{i:} − V_{j:}||^2 ≥ (1/d_tot) [ (1/c_max)(1/ρ_k + 1/ρ_m) − (1/√(ρ_k ρ_m))(1/c_min − 1/c_max) ] = grow / d_tot,  (9)

where grow = (1/c_max)(1/ρ_k + 1/ρ_m) − (1/√(ρ_k ρ_m))(1/c_min − 1/c_max). Moreover, if the columns of V are normalized to length 1, the above result holds with d_tot c_{max,min} replaced by max_k, min_k of n_k / ρ_k.

In the square brackets, c_max and c_min depend on the cluster-level degree distribution, while all the other quantities depend only on the preference frame. Hence, this expression is invariant with n, and as long as it is strictly positive, the cluster separation is Ω(1/d_tot). The next theorem is crucial in proving that L has a constant eigengap. We express the eigengap of P in terms of the preference frame H and the mixing inside each of the clusters C_k. For this, we resort to generalized stochastic matrices, i.e. rectangular positive matrices with equal row sums, and we relate their properties to the mixing of Markov chains on bipartite graphs. These tools are introduced here, for the sake of intuition, together with the main spectral result, while the rest of the proofs are in the extended version. Given C, for any vector x ∈ R^n, we denote by x^k, k = 1, . . . , K, the block of x indexed by elements of cluster k of C. Similarly, for any square matrix A ∈ R^{n×n}, we denote by A^{kl} = [A_{ij}]_{i∈k, j∈l} the block with rows indexed by i ∈ k and columns indexed by j ∈ l. Denote by ρ, λ_{1:K}, ν_{1:K} ∈ R^K respectively the stationary distribution, eigenvalues (footnote 3), and eigenvectors of R. We are interested in block-stochastic matrices P for which the eigenvalues of R are the principal eigenvalues. We call λ_{K+1}, . . . , λ_n the spurious eigenvalues. Theorem 6 below is a sufficient condition that bounds |λ_{K+1}| whenever each of the K^2 blocks of P is “homogeneous” in a sense that will be defined below. When we consider the matrix L = D^{−1/2} S D^{−1/2} partitioned according to C, it will be convenient to consider the off-diagonal blocks in pairs.
This is why the next result describes the properties of matrices consisting of a pair of off-diagonal blocks.

Proposition 5 (Eigenvalues for the off-diagonal blocks) Let M be the square matrix

M = [ 0  B ; A  0 ],  (10)

where A ∈ R^{n_2×n_1} and B ∈ R^{n_1×n_2}, and let x = [x_1; x_2], with x_{1,2} ∈ C^{n_{1,2}}, be an eigenvector of M with eigenvalue λ. Then

B x_2 = λ x_1,  A B x_2 = λ^2 x_2,  (11)
A x_1 = λ x_2,  B A x_1 = λ^2 x_1,  (12)
M^2 = [ BA  0 ; 0  AB ].  (13)

Moreover, if M is symmetric, i.e. B = A^T, then λ is a singular value of A, x is real, and −λ is also an eigenvalue of M, with eigenvector [x_1^T −x_2^T]^T. Assuming n_2 ≤ n_1 and that A is full rank, one can write A = V Λ U^T, with V ∈ R^{n_2×n_2}, U ∈ R^{n_1×n_2} orthogonal matrices and Λ a diagonal matrix of non-zero singular values.

Theorem 6 (Bounding the spurious eigenvalues of L) Let C, L, P, D, S, R, ρ be defined as above, and let λ be an eigenvalue of P. Assume that: (1) P is block-stochastic with respect to C; (2) λ_{1:K} are the eigenvalues of R, and |λ_K| > 0; (3) λ is not an eigenvalue of R; (4) denoting by λ_3^{kl} (resp. λ_2^{kk}) the third (resp. second) largest-in-magnitude eigenvalue of the block M^{kl} (resp. L^{kk}), we have |λ_3^{kl}| / λ_max(M^{kl}) ≤ c < 1 (resp. |λ_2^{kk}| / λ_max(L^{kk}) ≤ c). Then the spurious eigenvalues of P are bounded by c times a constant that depends only on R:

|λ| ≤ c max_{k=1:K} [ r_{kk} + Σ_{l≠k} √(r_{kl} r_{lk}) ].  (14)

Remarks: The factor that multiplies c can be further bounded. Denoting a = [√r_{kl}]_{l=1:K}^T and b = [√r_{lk}]_{l=1:K}^T,

r_{kk} + Σ_{l≠k} √(r_{kl} r_{lk}) = a^T b ≤ ||a|| ||b|| = √( (Σ_{l=1}^K r_{kl}) (Σ_{l=1}^K r_{lk}) ) = √( Σ_{l=1}^K r_{lk} ).  (15)

In other words,

|λ| ≤ c max_{k=1:K} √( Σ_{l=1}^K r_{lk} ).  (16)

The maximum column sum of a stochastic matrix is 1 if the matrix is doubly stochastic and larger than 1 otherwise, so the factor above can be as large as √K. However, one must remember that the interesting R matrices have “large” eigenvalues; in particular, we will be interested in λ_K > c. It is expected that under these conditions the factor depending on R is close to 1.
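Proposition 5 is easy to verify numerically in the symmetric case B = Aᵀ: M² is block-diagonal as in (13), and the nonzero eigenvalues of M come in ± pairs equal to the singular values of A. The random matrix below is an arbitrary illustrative input.

```python
import numpy as np

rng = np.random.default_rng(1)
n1, n2 = 2, 3
A = rng.random((n2, n1))                 # A in R^{n2 x n1}
B = A.T                                  # symmetric case: B = A^T
M = np.block([[np.zeros((n1, n1)), B],
              [A, np.zeros((n2, n2))]])

# Equation (13): M^2 = diag(BA, AB), with zero off-diagonal blocks.
M2 = M @ M
block_diag_ok = (np.allclose(M2[:n1, :n1], B @ A)
                 and np.allclose(M2[n1:, n1:], A @ B)
                 and np.allclose(M2[:n1, n1:], 0)
                 and np.allclose(M2[n1:, :n1], 0))

# Nonzero eigenvalues of M are +/- the singular values of A.
eigs = np.sort(np.linalg.eigvalsh(M))    # ascending: [-s1, -s2, 0, s2, s1]
svals = np.linalg.svd(A, compute_uv=False)
pairs_ok = (np.allclose(eigs[-n1:], np.sort(svals))
            and np.allclose(eigs[:n1], -np.sort(svals)[::-1]))
```

Since M is 5×5 with rank 2·rank(A) = 4, one eigenvalue is zero and the remaining four are ±σ₁, ±σ₂, matching the symmetric-case statement of the proposition.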
Footnote 3: Here too, eigenvalues will always be ordered in decreasing order of their magnitudes, with positive values preceding negative ones of the same magnitude. Consequently, for any stochastic matrix, λ_1 = 1 always.

The second remark concerns condition (4), that all blocks have small spurious eigenvalues. This condition is not merely a technical convenience. If a block had a large eigenvalue, near 1 or −1 (times its λ_max), then that block could itself be broken into two distinct clusters. In other words, the clustering C would not accurately capture the cluster structure of the matrix P. Hence, condition (4) amounts to requiring that no other cluster structure is present; in other words, that within each block the Markov chain induced by P mixes well.

4 Related work

Previous results we used. The Laplacian concentration results use a technique introduced recently by [9], and some of the basic matrix-theoretic results are based on [14], which studied the P and L matrices in the context of spectral clustering. Like many of the works we cite, we are indebted to the pioneering work on the perturbation of invariant subspaces of Davis and Kahan [18, 19, 20].

4.1 Previous related models

The configuration model for regular random graphs [4, 11] and for graphs with general fixed degrees [10, 12] is very well known. It can be shown by a simple calculation that the configuration model also admits a K-preference frame. In the particular case when the diagonal of the R matrix is 0 and the connections between clusters are given by a bipartite configuration model with fixed degrees, K-preference frames have been studied by [15] under the name “equitable graphs”; the object there was to provide a way to calculate the spectrum of the graph. Since the PFM is itself an extension of the SBM, many other extensions of the latter bear resemblance to the PFM.
Here we review only a subset of these: a series of strong, relatively recent advances which exploit the spectral properties of the SBM and extend them to handle a large range of degree distributions [7, 19, 5]. The PFM includes each of these models as a subclass (footnote 4). In [7], the authors study a model that coincides (up to some multiplicative constants) with the HPFM. The paper introduces an elegant algorithm that achieves partial recovery or better, based on the spectral properties of a random Laplacian-like matrix, and does not require knowledge of the partition size K. The PFM also coincides with the model of [1] and [8], called the expected degree model, w.r.t. the distribution of intra-cluster edges, but not w.r.t. the ambient edges, so the HPFM is a subclass of this model.

A different approach to recovery. The papers [5, 18, 9] propose regularizing the normalized Laplacian with respect to the influence of low degrees by adding the scaled unit matrix τI to the incidence matrix A, and thereby achieve recovery for much more imbalanced degree distributions than ours. Currently, we do not see an application of this interesting technique to the PFM, as the diagonal regularization destroys the separation of the intra-cluster and inter-cluster transitions, which guarantees the clustering property of the eigenvectors. Therefore, currently we cannot break the n log n limit into the ultra-sparse regime, although we recognize that this is an important current direction of research. Recovery results like ours can be easily extended to weighted, non-random graphs, and in this sense they are relevant to the spectral clustering of those graphs, when they are assumed to be noisy versions of a G that admits a PFM.

4.2 An empirical comparison of the recovery conditions

As obtaining general results comparing the various recovery conditions in the literature would be a tedious task, here we undertake a numerical comparison.
While the conclusions drawn from this are not universal, they illustrate well the stringency of the various conditions, as well as the gap between theory and actual recovery. For this, we construct HPFM models and verify numerically whether they satisfy the various conditions. We have also clustered random graphs sampled from this model, with good results (shown in the extended version).

Footnote 4: In particular, the models proposed in [7, 19, 5] are variations of the DC-SBM and thus forms of the homogeneous PFM.

We generate S from the HPFM model with K = 5, n = 5000. Each w_i is uniformly generated from (0.5, 1); n_{1:K} = (500, 1000, 1500, 1000, 1000), grow > 0, λ_{1:K} = (1, 0.8, 0.6, 0.4, 0.2). The matrix R is given below; note its last row, in which r_{55} < Σ_{l=1}^4 r_{5l}:

R =
[ .80 .07 .02 .02 .09 ]
[ .04 .52 .24 .12 .08 ]
[ .01 .20 .65 .15 .00 ]
[ .01 .08 .12 .70 .08 ]
[ .13 .21 .02 .32 .33 ],   ρ = (.25, .44, .54, .65, .17).  (17)

The conditions we verify include, besides ours, those obtained by [18], [19], [3] and [5]; since the original S is a perfect case for spectral clustering of weighted graphs, we also verify the theoretical recovery conditions for spectral clustering in [2] and [16].

Our result, Theorem 3. Assumptions 1 and 2 automatically hold by the construction of the data. By simulating the data, we find that d_min = 77.4 and d̂_min = 63, both of which are bigger than log n = 8.52; therefore Assumptions 3 and 4 hold. d_max = 509.3 and grow = 1.82 > 0, thus Assumptions 5 and 6 hold. After running Algorithm 1, the mis-clustering rate is r = 0.0008, which satisfies the theoretical bound. In conclusion, the dataset fits both the assumptions and the conclusion of Theorem 3.

Qin and Rohe [18]. This paper has an assumption on the lower bound on λ_K, namely (1/(8√3)) λ_K ≥ √(K ln(K/ϵ) / d_min), so that the concentration bound holds with probability 1 − ϵ. We set ϵ = 0.1 and obtain λ_K ≥ 12.3, which cannot hold since λ_K is upper bounded by 1 (footnote 5).
Rohe, Chatterjee, Yu [19]. Here one defines τ_n = d_min / n and requires τ_n^2 log n > 2 to ensure the concentration of L. To meet this assumption with n = 5000, one needs d_min ≥ 2422, while in our case d_min = 77.4. The assumption requires a very dense graph and is not satisfied by this dataset.

Balcan, Borgs, Braverman, Chayes [3]. Their theorem is based on self-determined community structure: it requires all the nodes to be more connected within their own cluster. However, in our graph, 1296 out of 5000 nodes have more connections to outside nodes than to nodes in their own cluster.

Ng, Jordan, Weiss [16] require λ_2 < 1 − δ, where δ > (2 + 2√2)ϵ, with ϵ = √(K(K−1)ϵ_1 + Kϵ_2^2), ϵ_1 ≥ max_{i_1,i_2∈{1,...,K}} Σ_{j∈C_{i_1}} Σ_{k∈C_{i_2}} A_{jk}^2 / (d̂_j d̂_k), and ϵ_2 ≥ max_{i∈{1,...,K}} Σ_{j∈S_i} d̂_j (Σ_{k,l∈S_i} A_{kl}^2 / (d̂_k d̂_l))^{1/2}. On the given data we find ϵ ≥ 36.69 and δ ≥ 125.28, which cannot hold since δ needs to be smaller than 1.

Chaudhuri, Chung, Tsiatas [5]. The recovery theorem of this paper requires d_i ≥ (128/9) ln(6n/δ), so that when all the assumptions hold, it recovers the clustering correctly with probability at least 1 − 6δ. We set δ = 0.01 and obtain d_i = 77.40 versus (128/9) ln(6n/δ) = 212.11; therefore this assumption fails as well.

For our method, the hardest condition to satisfy, and the most different from the others, was Assumption 6. We repeated this experiment with other weight distributions for which this assumption fails; the assumptions in the related papers continued to be violated. In [Qin and Rohe] we obtain λ_K ≥ 17.32. In [Rohe, Chatterjee, Yu] we still need d_min ≥ 2422. In [Balcan, Borgs, Braverman, Chayes], 1609 nodes are more connected to nodes outside their cluster. In [Balakrishnan, Xu, Krishnamurthy, Singh] we get σ = 0.172, which would need to satisfy σ = o(0.3292). In [Ng, Jordan, Weiss] we obtain δ ≥ 175.35. Therefore, the assumptions in these papers are all violated as well.
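The stationary distribution ρ used throughout these checks is just the left principal eigenvector of the frame matrix, ρᵀR = ρᵀ, and can be computed by power iteration. A sketch with a hypothetical 3-community stochastic matrix (the 5×5 example above is reported only to two decimals, so a clean toy R is used instead):

```python
import numpy as np

def stationary_distribution(R, tol=1e-12, max_iter=10000):
    """Left principal eigenvector of a stochastic matrix R: rho^T R = rho^T,
    via power iteration on distributions (assumes R is irreducible, aperiodic)."""
    rho = np.full(R.shape[0], 1.0 / R.shape[0])
    for _ in range(max_iter):
        new = rho @ R
        if np.abs(new - rho).max() < tol:
            break
        rho = new
    return rho / rho.sum()

# Hypothetical 3-community preference frame (rows sum to 1).
R = np.array([[0.8, 0.1, 0.1],
              [0.2, 0.6, 0.2],
              [0.1, 0.1, 0.8]])
rho = stationary_distribution(R)
```

Because power iteration is applied to a probability vector and R is row-stochastic, every iterate remains a distribution, and the final normalization is only a numerical safeguard.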
5 Conclusion In this paper, we have introduced the preference frame model, which is more flexible and subsumes many current models including SBM and DC-SBM. It produces state-of-the art recovery rates comparable to existing models. To accomplish this, we used a parametrization that is clearer and more intuitive. The theoretical results are based on the new geometric techniques which control the eigengaps of the matrices with piecewise constant eigenvectors. We note that the main result theorem 3 uses independent sampling of edges only to prove the concentration of the laplacian matrix. The PFM model can be easily extended to other graph models with dependent edges if one could prove concentration and eigenvalue separation. For example, when R has rational entries, the subgraph induced by each block of A can be represented by a random d-regular graph with a specified degree. 5To make λ ≤1 possible, one needs dmin ≥11718. 8 References [1] Sanjeev Arora, Rong Ge, Sushant Sachdeva, and Grant Schoenebeck. Finding overlapping communities in social networks: toward a rigorous approach. In Proceedings of the 13th ACM Conference on Electronic Commerce, pages 37–54. ACM, 2012. [2] Sivaraman Balakrishnan, Min Xu, Akshay Krishnamurthy, and Aarti Singh. Noise thresholds for spectral clustering. In Advances in Neural Information Processing Systems, pages 954–962, 2011. [3] Maria-Florina Balcan, Christian Borgs, Mark Braverman, Jennifer Chayes, and Shang-Hua Teng. Finding endogenously formed communities. arxiv preprint arXiv:1201.4899v2, 2012. [4] Bela Bollobas. Random Graphs. Cambridge University Press, second edition, 2001. [5] K. Chaudhuri, F. Chung, and A. Tsiatas. Spectral clustering of graphs with general degrees in extended planted partition model. Journal of Machine Learning Research, pages 1–23, 2012. [6] Yudong Chen and Jiaming Xu. Statistical-computational tradeoffs in planted problems and submatrix localization with a growing number of clusters and submatrices. 
arXiv preprint arXiv:1402.1267, 2014.
[7] Amin Coja-Oghlan and Andre Lanka. Finding planted partitions in random graphs with general degree distributions. SIAM Journal on Discrete Mathematics, 23:1682–1714, 2009.
[8] M. O. Jackson. Social and Economic Networks. Princeton University Press, 2008.
[9] Can M. Le and Roman Vershynin. Concentration and regularization of random graphs. 2015.
[10] Brendan McKay. Asymptotics for symmetric 0-1 matrices with prescribed row sums. Ars Combinatoria, 19A:15–26, 1985.
[11] Brendan McKay and Nicholas Wormald. Uniform generation of random regular graphs of moderate degree. Journal of Algorithms, 11:52–67, 1990.
[12] Brendan McKay and Nicholas Wormald. Asymptotic enumeration by degree sequence of graphs with degrees o(n^{1/2}). Combinatorica, 11(4):369–382, 1991.
[13] Marina Meilă and Jianbo Shi. Learning segmentation by random walks. In T. K. Leen, T. G. Dietterich, and V. Tresp, editors, Advances in Neural Information Processing Systems, volume 13, pages 873–879, Cambridge, MA, 2001. MIT Press.
[14] Marina Meilă and Jianbo Shi. A random walks view of spectral segmentation. In T. Jaakkola and T. Richardson, editors, Artificial Intelligence and Statistics (AISTATS), 2001.
[15] M. E. J. Newman and Travis Martin. Equitable random graphs. 2014.
[16] Andrew Y. Ng, Michael I. Jordan, and Yair Weiss. On spectral clustering: Analysis and an algorithm. Advances in Neural Information Processing Systems, 2:849–856, 2002.
[17] J. R. Norris. Markov Chains. Cambridge University Press, 1997.
[18] Tai Qin and Karl Rohe. Regularized spectral clustering under the degree-corrected stochastic blockmodel. In Advances in Neural Information Processing Systems, pages 3120–3128, 2013.
[19] Karl Rohe, Sourav Chatterjee, and Bin Yu. Spectral clustering and the high-dimensional stochastic blockmodel. The Annals of Statistics, 39(4):1878–1915, 2011.
[20] Gilbert W. Stewart and Ji-guang Sun. Matrix Perturbation Theory, volume 175.
Academic Press, New York, 1990.
[21] Ulrike von Luxburg. A tutorial on spectral clustering. Statistics and Computing, 17(4):395–416, 2007.
Closed-form Estimators for High-dimensional Generalized Linear Models
Eunho Yang, IBM T.J. Watson Research Center, eunhyang@us.ibm.com
Aurélie C. Lozano, IBM T.J. Watson Research Center, aclozano@us.ibm.com
Pradeep Ravikumar, University of Texas at Austin, pradeepr@cs.utexas.edu
Abstract
We propose a class of closed-form estimators for GLMs under high-dimensional sampling regimes. Our class of estimators is based on deriving closed-form variants of the vanilla unregularized MLE that are (a) well-defined even under high-dimensional settings, and (b) available in closed form. We then perform thresholding operations on this MLE variant to obtain our class of estimators. We derive a unified statistical analysis of our class of estimators, and show that it enjoys strong statistical guarantees, in both parameter error and variable selection, that surprisingly match those of the more complex regularized GLM MLEs, even though our closed-form estimators are computationally much simpler. We derive instantiations of our class of closed-form estimators, as well as corollaries of our general theorem, for the special cases of logistic, exponential and Poisson regression models. We corroborate the surprising statistical and computational performance of our class of estimators via extensive simulations.
1 Introduction
We consider the estimation of generalized linear models (GLMs) [1] under high-dimensional settings, where the number of variables p may greatly exceed the number of observations n. GLMs are a very general class of statistical models for the conditional distribution of a response variable given a covariate vector, where the form of the conditional distribution is specified by any exponential family distribution.
Popular instances of GLMs include logistic regression, which is widely used for binary classification, as well as Poisson regression, which, together with logistic regression, is widely used in key tasks in genomics, such as classifying the status of patients based on genotype data [2] and identifying genes that are predictive of survival [3], among others. Recently, GLMs have also been used as a key tool in the construction of graphical models [4]. Overall, GLMs have proven very useful in many modern applications involving prediction with high-dimensional data. Accordingly, an important problem is the estimation of such GLMs under high-dimensional sampling regimes. Under such sampling regimes, it is now well-known that consistent estimators cannot be obtained unless low-dimensional structural constraints are imposed upon the underlying regression model parameter vector. Popular structural constraints include sparsity, which encourages parameter vectors supported on very few non-zero entries, group-sparse constraints, and low-rank structure for matrix-structured parameters, among others. Several lines of work have focused on consistent estimators for such structurally constrained high-dimensional GLMs. A popular instance, for the case of sparsity-structured GLMs, is the ℓ1-regularized maximum likelihood estimator (MLE), which has been shown to have strong theoretical guarantees, ranging from risk consistency [5] and consistency in the ℓ1 and ℓ2 norms [6, 7, 8], to model selection consistency [9]. Another popular instance is the ℓ1/ℓq (for q ≥ 2) regularized MLE for group-sparse-structured logistic regression, for which prediction consistency has been established [10]. All of these estimators solve general non-linear convex programs involving non-smooth components due to regularization.
While a strong line of research has developed computationally efficient optimization methods for solving these programs, these methods are iterative and their computational complexity scales polynomially with the number of variables and samples [10, 11, 12, 13], making them expensive for very large-scale problems. A key reason for the popularity of these iterative methods is that while the number of iterations are some function of the required accuracy, each iteration itself consists of a small finite number of steps, and can thus scale to very large problems. But what if we could construct estimators that overall require only a very small finite number of steps, akin to a single iteration of popular iterative optimization methods? The computational gains of such an approach would require that the steps themselves be suitably constrained, and moreover that the steps could be suitably profiled and optimized (e.g. efficient linear algebra routines implemented in BLAS libraries), a systematic study of which we defer to future work. We are motivated on the other hand by the simplicity of such a potential class of “closed-form” estimators. In this paper, we thus address the following question: “Is it possible to obtain closed-form estimators for GLMs under high-dimensional settings, that nonetheless have the sharp convergence rates of the regularized convex programs and other estimators noted above?” This question was first considered for linear regression models [14], and was answered in the affirmative. Our goal is to see whether a positive response can be provided for the more complex statistical model class of GLMs as well. In this paper we focus specifically on the class of sparse-structured GLMs, though our framework should extend to more general structures as well. 
An inkling of why closed-form estimation for high-dimensional GLMs is much trickier than for high-dimensional linear models is that, even under small-sample settings, linear regression models have a statistically efficient closed-form estimator, the ordinary least-squares (OLS) estimator, which also serves as the MLE under Gaussian noise; for GLMs, on the other hand, we do not yet have statistically efficient closed-form estimators even under small-sample settings. A classical algorithm to solve for the MLE of logistic regression models, for instance, is the iteratively reweighted least squares (IRLS) algorithm, which, as its name suggests, is iterative and not available in closed form. Indeed, as we show in the sequel, developing our class of estimators for GLMs requires far more advanced mathematical machinery (moment polytopes, and projections onto an interior subset of these polytopes, for instance) than the linear regression case. Our starting point in devising a closed-form estimator for GLMs is nonetheless to revisit this classical unregularized MLE for GLMs from a statistical viewpoint, and investigate the reasons why the estimator fails or is even ill-defined in the high-dimensional setting. These insights enable us to propose variants of the MLE that are not only well-defined but can also be easily computed in analytic form. We provide a unified statistical analysis for our class of closed-form GLM estimators, and instantiate our theoretical results for the specific cases of logistic, exponential, and Poisson regression. Surprisingly, our results indicate that our estimators have statistical guarantees comparable to the regularized MLEs, in terms of both variable selection and parameter estimation error, which we also corroborate via extensive simulations (which even show a slight statistical performance edge for our closed-form estimators).
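For concreteness, the IRLS iteration mentioned above can be sketched as follows (an illustrative implementation under our own naming and the y ∈ {0, 1} encoding; it is not the paper's code, and the tiny dataset is hypothetical):

```python
import numpy as np

def irls_logistic(X, y, n_iter=25):
    """Iteratively reweighted least squares (Newton's method) for the
    unregularized logistic-regression MLE; y is encoded in {0, 1}."""
    theta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X @ theta))            # current mean estimates
        w = p * (1.0 - p)                               # IRLS weights
        z = X @ theta + (y - p) / np.maximum(w, 1e-10)  # working response
        # weighted least-squares update: theta = (X^T W X)^{-1} X^T W z
        theta = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * z))
    return theta

# tiny non-separable example: each iteration is a full weighted LS solve,
# which is exactly why the MLE is not available in closed form
X = np.column_stack([np.ones(6), np.array([-2.0, -1.0, -0.5, 0.5, 1.0, 2.0])])
y = np.array([0.0, 0.0, 1.0, 0.0, 1.0, 1.0])
theta_hat = irls_logistic(X, y)
```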
Moreover, our closed-form estimators are much simpler and computationally competitive, as corroborated by our extensive simulations. With respect to the conditions we impose on the GLM models, we require that the population covariance matrix of our covariates be weakly sparse, which is a different condition than those typically imposed for regularized MLE estimators; we discuss this further in Section 3.2. Overall, we hope our simple class of statistically as well as computationally efficient closed-form estimators for GLMs will open up the use of GLMs in large-scale machine learning applications even to lay users on the one hand, and on the other hand encourage the development of new classes of "simple" estimators with strong statistical guarantees, extending the initial proposals in this paper.
2 Setup
We consider the class of generalized linear models (GLMs), where a response variable y ∈ Y, conditioned on a covariate vector x ∈ R^p, follows an exponential family distribution:

  P(y | x; θ*) = exp{ (h(y) + y⟨θ*, x⟩ − A(⟨θ*, x⟩)) / c(σ) }   (1)

where σ ∈ R, σ > 0, is a fixed and known scale parameter, θ* ∈ R^p is the GLM parameter of interest, and A(⟨θ*, x⟩) is the log-partition function, or log-normalization constant, of the distribution. Our goal is to estimate the GLM parameter θ* given n i.i.d. samples {(x^(i), y^(i))}_{i=1}^n. By properties of exponential families, the conditional moment of the response given the covariates can be written as µ(⟨θ*, x⟩) ≡ E(y | x; θ*) = A′(⟨θ*, x⟩).
Examples. Popular instances of (1) include the standard linear regression model, the logistic regression model, and the Poisson regression model, among others. In the case of the linear regression model, we have a response variable y ∈ R with conditional distribution P(y | x, θ*) ∝ exp{ (−y²/2 + y⟨θ*, x⟩ − ⟨θ*, x⟩²/2) / σ² }, where the log-partition function (or log-normalization constant) A(a) of (1) in this specific case is given by A(a) = a²/2.
Another popular GLM instance is the logistic regression model for a categorical output variable y ∈ Y ≡ {−1, 1}: P(y | x, θ*) ∝ exp{ y⟨θ*, x⟩ − log[exp(−⟨θ*, x⟩) + exp(⟨θ*, x⟩)] }, where the log-partition function is A(a) = log(exp(−a) + exp(a)). The exponential regression model P(y | x, θ*) in turn is given by exp{ y⟨θ*, x⟩ + log(−⟨θ*, x⟩) }. Here the domain of the response variable is Y = R_+, the set of non-negative real numbers (it is typically used to model time intervals between events, for instance), and the log-partition function is A(a) = −log(−a). Our final example is the Poisson regression model, P(y | x, θ*) ∝ exp{ −log(y!) + y⟨θ*, x⟩ − exp(⟨θ*, x⟩) }, where the response variable is count-valued with domain Y ≡ {0, 1, 2, …}, and the log-partition function is A(a) = exp(a). Any exponential family distribution can be used to derive a canonical GLM regression model (1) of a response y conditioned on covariates x, by setting the canonical parameter of the exponential family distribution to ⟨θ*, x⟩. For the parameterization to be valid, the conditional density should be normalizable, so that A(⟨θ*, x⟩) < +∞.
High-dimensional Estimation. Suppose that we are given n covariate vectors x^(i) ∈ R^p, drawn i.i.d. from some distribution, and corresponding response variables y^(i) ∈ Y, drawn from the distribution P(y | x^(i), θ*) in (1). A key goal in statistical estimation is to estimate the parameters θ* ∈ R^p given just the samples {(x^(i), y^(i))}_{i=1}^n. Such estimation becomes particularly challenging in a high-dimensional regime, where the dimension p of the covariate vector is potentially even larger than the number of samples n. In such high-dimensional regimes, it is well understood that structural constraints on θ* are necessary in order to find consistent estimators. In this paper, we focus on the structural constraint of element-wise sparsity, so that the number of non-zero elements in θ* is at most some value k much smaller than p: ‖θ*‖₀ ≤ k.
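The log-partition functions of the examples above, together with their mean maps A′ and inverse maps (A′)^{−1} (which the estimator in Section 3 will need), can be collected in a small lookup structure (a sketch; the dictionary layout and names are ours, not the paper's):

```python
import math

# (A, A', (A')^{-1}) for three of the GLM examples above.
GLM_FAMILIES = {
    "linear":   (lambda a: a * a / 2.0,                  # A(a) = a^2 / 2
                 lambda a: a,                            # A'(a) = a
                 lambda mu: mu),                         # identity inverse map
    "logistic": (lambda a: math.log(math.exp(-a) + math.exp(a)),
                 lambda a: (math.exp(2 * a) - 1) / (math.exp(2 * a) + 1),
                 lambda mu: 0.5 * math.log((1 + mu) / (1 - mu))),
    "poisson":  (lambda a: math.exp(a),                  # A(a) = exp(a)
                 lambda a: math.exp(a),                  # A'(a) = exp(a)
                 lambda mu: math.log(mu)),               # log-link inverse
}

# sanity check: (A')^{-1} inverts the mean map A' at an interior point
for A, A_prime, A_inv in GLM_FAMILIES.values():
    assert abs(A_inv(A_prime(0.3)) - 0.3) < 1e-12
```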
Estimators: Regularized Convex Programs. The ℓ1 norm is known to encourage the estimation of such sparse-structured parameters θ*. Accordingly, a popular class of M-estimators for sparse-structured GLM parameters is the ℓ1-regularized maximum log-likelihood estimator for (1). Given n samples {(x^(i), y^(i))}_{i=1}^n from P(y | x, θ*), the ℓ1-regularized MLE can be written as:

  minimize_θ { −⟨θ, (1/n) Σ_{i=1}^n y^(i) x^(i)⟩ + (1/n) Σ_{i=1}^n A(⟨θ, x^(i)⟩) + λ_n ‖θ‖₁ }.

For notational simplicity, we collate the n observations in vector and matrix form: we overload the notation y ∈ R^n to denote the vector of n responses, so that the i-th element y_i of y is y^(i), and let X ∈ R^{n×p} denote the design matrix whose i-th row is [x^(i)]ᵀ. With this notation we can rewrite the optimization problem characterizing the ℓ1-regularized MLE simply as

  minimize_θ { −(1/n) θᵀXᵀy + (1/n) 1ᵀA(Xθ) + λ_n ‖θ‖₁ },

where we overload the notation A(·) so that for an input vector η ∈ R^n, A(η) ≡ (A(η₁), A(η₂), …, A(η_n))ᵀ, and 1 ≡ (1, …, 1)ᵀ ∈ R^n.
3 Closed-form Estimators for High-dimensional GLMs
The goal of this paper is to derive a general class of closed-form estimators for high-dimensional GLMs, in contrast to solving huge, non-differentiable ℓ1-regularized optimization problems. Before introducing our class of such closed-form estimators, we first introduce some notation. For any u ∈ R^p, we use [S_λ(u)]_i = sign(u_i) max(|u_i| − λ, 0) to denote the element-wise soft-thresholding operator, with thresholding parameter λ. For any given matrix M ∈ R^{p×p}, we denote by T_ν(M) : R^{p×p} → R^{p×p} a family of matrix thresholding operators that are defined pointwise, so that they can be written as [T_ν(M)]_{ij} := ρ_ν(M_{ij}) for any scalar thresholding operator ρ_ν(·) that satisfies the following conditions for any input a ∈ R: (a) |ρ_ν(a)| ≤ |a|, (b) ρ_ν(a) = 0 for |a| ≤ ν, and (c) |ρ_ν(a) − a| ≤ ν. The standard soft-thresholding and hard-thresholding operators are both pointwise operators that satisfy these properties.
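The two thresholding operators just defined are one-liners in NumPy (a sketch; hard thresholding is used here as the pointwise rule ρ_ν):

```python
import numpy as np

def soft_threshold(u, lam):
    """Element-wise soft-thresholding: [S_lam(u)]_i = sign(u_i) max(|u_i| - lam, 0)."""
    u = np.asarray(u, dtype=float)
    return np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)

def hard_threshold_matrix(M, nu):
    """Pointwise matrix thresholding T_nu with the hard-thresholding rule:
    entries with |M_ij| <= nu are zeroed, all other entries are kept unchanged."""
    M = np.asarray(M, dtype=float)
    return np.where(np.abs(M) > nu, M, 0.0)
```

Both rules satisfy conditions (a)-(c): values never grow in magnitude, everything at or below the threshold is zeroed, and no entry moves by more than the threshold (for hard thresholding, surviving entries do not move at all).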
See [15] for further discussion of such pointwise matrix thresholding operators. For any η ∈ R^n, we let ∇A(η) denote the vector of element-wise gradients: ∇A(η) ≡ (A′(η₁), A′(η₂), …, A′(η_n))ᵀ. We assume that the exponential family underlying the GLM is minimal, so that this map is invertible; for any µ ∈ R^n in the range of ∇A(·), we can then denote by [∇A]^{−1}(µ) the element-wise inverse map of ∇A(·): ((A′)^{−1}(µ₁), (A′)^{−1}(µ₂), …, (A′)^{−1}(µ_n))ᵀ. Consider the response moment polytope

  M := {µ : µ = E_p[y], for some distribution p over y ∈ Y},

and let M° denote the interior of M. Our closed-form estimator will use a carefully selected subset

  M̄ ⊆ M°.   (2)

Denote the projection of a response variable y ∈ Y onto this subset as Π_M̄(y) = argmin_{µ∈M̄} |y − µ|, where the subset M̄ is selected so that the projection step is always well-defined and the minimum exists. Given a vector y ∈ Y^n, we denote the vector of element-wise projections of entries in y as Π_M̄(y), so that:

  [Π_M̄(y)]_i := Π_M̄(y_i).   (3)

As the conditions underlying our theorem will make clear, we will need the operator [∇A]^{−1}(·) defined above to be both well-defined and Lipschitz on the subset M̄ of the interior of the response moment polytope. In later sections, we will show how to carefully construct such a subset M̄ for different GLM models. We now have the machinery to describe our class of closed-form estimators:

  θ̂_Elem = S_{λ_n}( [T_ν(XᵀX/n)]^{−1} Xᵀ[∇A]^{−1}(Π_M̄(y)) / n ),   (4)

where the various mathematical terms were defined above. It can be immediately seen that the estimator is available in closed form. In a later section, we will see instantiations of this class of estimators for various specific GLM models, where we will see that these estimators take very simple forms. Before doing so, we first describe some insights that led to our particular construction of the high-dimensional GLM estimator above.
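As a concrete instance of (4), the whole pipeline for the Poisson case fits in a few lines of NumPy (a minimal sketch under illustrative, untuned parameter choices, not the paper's implementation; hard thresholding plays the role of T_ν, and the projection maps zero counts to ε, as derived for Poisson models in Section 4):

```python
import numpy as np

def elem_glm_poisson(X, y, lam, nu, eps=1e-4):
    """Sketch of the closed-form estimator (4) for Poisson regression:
    soft-threshold the solution of a thresholded-covariance linear system."""
    n, _ = X.shape
    S = X.T @ X / n
    S_thr = np.where(np.abs(S) > nu, S, 0.0)        # T_nu(X^T X / n)
    z = np.log(np.where(y == 0, eps, y))            # [grad A]^{-1}(Pi_Mbar(y))
    theta = np.linalg.solve(S_thr, X.T @ z / n)     # least-squares-type step
    return np.sign(theta) * np.maximum(np.abs(theta) - lam, 0.0)  # S_lam(.)

# illustrative run on synthetic Poisson data (all settings here are ours)
rng = np.random.default_rng(0)
n, p = 200, 10
X = rng.standard_normal((n, p))
theta_star = np.zeros(p)
theta_star[:2] = [0.5, 0.3]
y = rng.poisson(np.exp(X @ theta_star))
theta_hat = elem_glm_poisson(X, y, lam=0.1, nu=4 * np.sqrt(np.log(p) / n))
```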
3.1 Insights Behind the Construction of Our Closed-Form Estimator
We first revisit the classical unregularized MLE for GLMs:

  θ̂ ∈ argmin_θ { −(1/n) θᵀXᵀy + (1/n) 1ᵀA(Xθ) }.

Note that this optimization problem does not have a unique minimum in general, especially under high-dimensional sample settings where p > n. Nonetheless, it is instructive to study why this unregularized MLE is either ill-suited or even ill-defined under high-dimensional settings. The stationary condition of the unregularized MLE optimization problem can be written as:

  Xᵀy = Xᵀ∇A(Xθ̂).   (5)

There are two main caveats to solving for a unique θ̂ satisfying this stationary condition, which we clarify below.
(Mapping to mean parameters) In a high-dimensional sampling regime where p ≥ n, (5) can be seen to reduce to y = ∇A(Xθ̂) (so long as Xᵀ has rank n). This then suggests solving for Xθ̂ = [∇A]^{−1}(y), where we recall the definition of the operator ∇A(·) in terms of element-wise operations involving A′(·). The caveat, however, is that A′(·) maps only onto the interior M° of the response moment polytope [16], so that [A′(·)]^{−1} is well-defined only when given µ ∈ M°. When entries of the sample response vector y lie outside of M°, as will typically be the case and as we illustrate for multiple instances of GLM models in later sections, the inverse mapping is not well-defined. We thus first project the sample response vector y onto M̄ ⊆ M° to obtain Π_M̄(y) as defined in (3). Armed with this approximation, we then consider the more amenable Π_M̄(y) ≈ ∇A(Xθ̂), instead of the original stationary condition in (5).
(Sample covariance) We thus now have the approximate characterization of the MLE as Xθ̂ ≈ [∇A]^{−1}(Π_M̄(y)). This then suggests solving for an approximate MLE θ̂ via least squares as θ̂ = [XᵀX]^{−1} Xᵀ[∇A]^{−1}(Π_M̄(y)). The high-dimensional regime with p > n poses a caveat here, since the sample covariance matrix (XᵀX)/n would then be rank-deficient, and hence not invertible.
Our approach is to instead use the thresholded sample covariance matrix T_ν(XᵀX/n) defined in the previous subsection, which can be shown to be invertible and consistent for the population covariance matrix Σ with high probability [15, 17]. In particular, recent work [15] has shown that the thresholded sample covariance T_ν(XᵀX/n) is consistent with respect to the spectral norm, with convergence rate

  ‖T_ν(XᵀX/n) − Σ‖_op ≤ O(c₀ √(log p / n)),

under some mild conditions detailed in our main theorem. Plugging in this thresholded sample covariance matrix to get an approximate least-squares solution for the GLM parameters θ, and then performing soft-thresholding, precisely yields our closed-form estimator in (4). Our class of closed-form estimators in (4) can thus be viewed as surgical approximations to the MLE that make it well-defined in high-dimensional settings, as well as available in closed form. But would such an approximation actually yield rigorous consistency guarantees? Surprisingly, as we show in the next section, not only is our class of estimators consistent, but our corollaries show that its statistical guarantees are comparable to those of state-of-the-art iterative methods such as regularized MLEs. We note that our class of closed-form estimators in (4) can also be written in an equivalent form that is more amenable to analysis:

  minimize_θ ‖θ‖₁   (6)
  s.t. ‖θ − [T_ν(XᵀX/n)]^{−1} Xᵀ[∇A]^{−1}(Π_M̄(y)) / n‖_∞ ≤ λ_n.

The equivalence between (4) and (6) easily follows from the fact that the optimization problem (6) decomposes into independent element-wise sub-problems, each of which corresponds to soft-thresholding. It can be seen that this form is also amenable to extending the framework in this paper to structures beyond sparsity, by substituting in alternative regularizers. Due to space constraints, the computational complexity is discussed in detail in the Appendix.
3.2 Statistical Guarantees
In this subsection, we provide a unified statistical analysis for the class of estimators (4) under the following standard conditions, namely sparse θ* and sub-Gaussian design X:
(C1) The parameter θ* in (1) is exactly sparse with k non-zero elements indexed by the support set S, so that θ*_{S^c} = 0.
(C2) Each row X_i of the design matrix X ∈ R^{n×p} is i.i.d. sampled from a zero-mean distribution with covariance matrix Σ such that, for any v ∈ R^p, the variable ⟨v, X_i⟩ is sub-Gaussian with parameter at most κ_u ‖v‖₂.
Our next assumption is on the covariance matrix of the covariate random vector:
(C3) The covariance matrix Σ of X satisfies ‖Σw‖_∞ ≥ κ_ℓ ‖w‖_∞ for all w ∈ R^p, with a fixed constant κ_ℓ > 0. Moreover, Σ is approximately sparse, along the lines of [17]: for some positive constant D, Σ_ii ≤ D for all diagonal entries, and moreover, for some 0 ≤ q < 1 and c₀, max_i Σ_{j=1}^p |Σ_{ij}|^q ≤ c₀. If q = 0, this condition is equivalent to Σ being sparse.
We also introduce some notation used in the following theorem. Under condition (C2), we have with high probability that |⟨θ*, x^(i)⟩| ≤ 2κ_u ‖θ*‖₂ √(log n) for all samples i = 1, …, n. Let τ* := 2κ_u ‖θ*‖₂ √(log n). We then let M₀ be the subset of M such that

  M₀ := { µ : µ = A′(α), where α ∈ [−τ*, τ*] }.   (7)

We also define κ_{u,A} and κ_{ℓ,A} as upper bounds on A″(·) and on the derivative of the inverse map (A′)^{−1}(·), respectively:

  max_{α∈[−τ*,τ*]} |A″(α)| ≤ κ_{u,A},   max_{a∈M₀∪M̄} |((A′)^{−1})′(a)| ≤ κ_{ℓ,A}.   (8)

Armed with these conditions and notation, we derive our main theorem:
Theorem 1. Consider any generalized linear model (1) for which conditions (C1), (C2) and (C3) all hold. Now, suppose that we solve the estimation problem (4) setting the thresholding parameter ν = C₁ √(log p′ / n), where C₁ := 16 (max_j Σ_jj) √(10τ) for any constant τ > 2, and p′ := max{n, p}.
Furthermore, suppose also that we set the constraint bound λ_n = C₂ √(log p′ / n) + E, where C₂ := (2/κ_ℓ)(κ_u κ_{ℓ,A} √(2κ_{u,A}) + C₁ ‖θ*‖₁) and where E depends on the approximation error induced by the projection (3), and is defined as:

  E := max_{i=1,…,n} ( y^(i) − [Π_M̄(y)]_i ) · (4κ_u κ_{ℓ,A} / κ_ℓ) √(log p′ / n).

(A) Then, as long as n > (2c₁c₀/κ_ℓ)^{2/(1−q)} log p′, where c₁ is a constant depending only on τ and max_i Σ_ii, any optimal solution θ̂ of (4) is guaranteed to be consistent:

  ‖θ̂ − θ*‖_∞ ≤ 2 ( C₂ √(log p′/n) + E ),
  ‖θ̂ − θ*‖₂ ≤ 4√k ( C₂ √(log p′/n) + E ),
  ‖θ̂ − θ*‖₁ ≤ 8k ( C₂ √(log p′/n) + E ).

(B) Moreover, the support set of the estimate θ̂ correctly excludes all true zero entries of θ*. Moreover, when min_{s∈S} |θ*_s| ≥ 3λ_n, it correctly includes all non-zero entries of the true support of θ*, with probability at least 1 − c p′^{−c′} for some universal constants c, c′ > 0 depending on τ and κ_u.
Remark 1. While our class of closed-form estimators and analyses consider sparse-structured parameters, these can be seamlessly extended to more general structures (such as group sparsity and low rank), using appropriate thresholding functions.
Remark 2. The condition (C3) required in Theorem 1 is different from (and possibly stronger than) the restricted strong convexity [8] required for the ℓ2 error bound of the ℓ1-regularized MLE. A key facet of our analysis under condition (C3), however, is that it provides much simpler and clearer constants in our non-asymptotic error bounds. Deriving constant factors in the analysis of the ℓ1-regularized MLE, with its restricted strong convexity condition, involves many probabilistic statements and is non-trivial, as shown in [8]. Another key facet of our analysis in Theorem 1 is that it also provides an ℓ∞ error bound, and guarantees the sparsistency of our closed-form estimator; for ℓ1-regularized MLEs, this requires a separate sparsistency analysis.
In the case of the simplest standard linear regression models, [18] showed that the incoherence condition |||Σ_{S^cS} Σ_{SS}^{−1}|||_∞ < 1 is required for sparsistency, where ||| · |||_∞ is the maximum absolute row sum. As discussed in [18], instances of such incoherent covariance matrices Σ include the identity and Toeplitz matrices; these matrices can be seen to also satisfy our condition (C3). On the other hand, not all matrices that satisfy our condition (C3) satisfy the stringent incoherence condition in turn. For example, consider Σ where Σ_{SS} = 0.95 I₃ + 0.05 · 1_{3×3} for a matrix 1 of ones, Σ_{SS^c} is all zeros except that its last column is 0.4 · 1_{3×1}, and Σ_{S^cS^c} = I_{(p−3)×(p−3)}. This positive definite Σ can be seen to satisfy our condition (C3), since each row has only 4 non-zeros. However, |||Σ_{S^cS} Σ_{SS}^{−1}|||_∞ equals 1.0909, which is larger than 1, and consequently the incoherence condition required for the Lasso is not satisfied. We defer relaxing our condition (C3) further, as well as a deeper investigation of all the above conditions, to future work.
Remark 3. The constant C₂ in the statement depends on ‖θ*‖₁, which in the worst case where only ‖θ*‖₂ is bounded may scale with √k. On the other hand, our theorem does not require an explicit sample complexity condition that n be larger than some function of k, while the analysis of ℓ1-regularized MLEs does additionally require n ≥ c k log p for some constant c. In our experiments, we verify that our closed-form estimators outperform the ℓ1-regularized MLEs even when k is fairly large (for instance, when (n, p, k) = (5000, 10⁴, 1000)).
In order to apply Theorem 1 to a specific instance of GLMs, we need to specify the quantities in (8), as well as carefully construct a subset M̄ of the interior of the response moment polytope. In the case of the simplest linear models described in Section 2, we have the identity mapping µ = A′(η) = η. The inequalities in (8) can thus be seen to be satisfied with κ_{ℓ,A} = κ_{u,A} = 1.
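The incoherence violation claimed in this example is easy to verify numerically (a sketch with p = 10 chosen for concreteness; the paper leaves p general, but the value 1.0909 does not depend on it):

```python
import numpy as np

p = 10                                   # any p > 3 gives the same number
Sigma = np.eye(p)
Sigma[:3, :3] = 0.95 * np.eye(3) + 0.05 * np.ones((3, 3))   # Sigma_SS
Sigma[:3, p - 1] = 0.4                   # Sigma_{S S^c}: last column is 0.4
Sigma[p - 1, :3] = 0.4                   # symmetric counterpart Sigma_{S^c S}

assert np.all(np.linalg.eigvalsh(Sigma) > 0)      # positive definite
# maximum absolute row sum of Sigma_{S^c S} Sigma_{SS}^{-1}
incoherence = np.abs(Sigma[3:, :3] @ np.linalg.inv(Sigma[:3, :3])).sum(axis=1).max()
print(round(incoherence, 4))             # 1.0909 > 1: Lasso incoherence fails
```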
Moreover, we can set M̄ := M° = R so that Π_M̄(y) = y, and we trivially recover the previous results in [14] as a special case. In the following sections, we derive the consequences of our framework for the more complex instances of logistic and Poisson regression models, which are also important members of the GLM family.
4 Key Corollaries
In order to derive corollaries of our main Theorem 1, we need to specify the response polytope subsets M̄ and M₀ in (2) and (7) respectively, as well as bound the two quantities κ_{ℓ,A} and κ_{u,A} in (8).
Logistic regression models. The exponential family log-partition function of the logistic regression model described in Section 2 is A(η) = log[exp(−η) + exp(η)]. Consequently, its second derivative A″(η) = 4exp(2η)/(exp(2η) + 1)² ≤ 1 for any η, so that (8) holds with κ_{u,A} = 1. The response moment polytope for the binary response variable y ∈ Y ≡ {−1, 1} is the interval M = [−1, 1], whose interior is M° = (−1, 1). For the subset of the interior, we define M̄ = [−1 + ε, 1 − ε] for some 0 < ε < 1. At the same time, the forward mapping is given by A′(η) = (exp(2η) − 1)/(exp(2η) + 1), and hence M₀ becomes [−(a−1)/(a+1), (a−1)/(a+1)], where a := n^{4κ_u‖θ*‖₂/√(log n)}. The inverse mapping of the logistic model is (A′)^{−1}(µ) = (1/2) log((1+µ)/(1−µ)), and given M̄ and M₀, it can be seen that (A′)^{−1}(µ) is Lipschitz on M̄ ∪ M₀ with constant at most κ_{ℓ,A} := max{ 1/2 + (1/2) n^{4κ_u‖θ*‖₂/√(log n)}, 1/ε } in (8). Note that with this setting of the subset M̄, we have max_{i=1,…,n}(y^(i) − [Π_M̄(y)]_i) = ε and, moreover, Π_M̄(y_i) = y_i(1 − ε), which we will use in the corollary below.
Poisson regression models. Another important instance of GLMs is the Poisson regression model, which is becoming increasingly relevant in modern big-data settings with varied multivariate count data. For the Poisson regression model, the second derivative of A(·) is not uniformly upper bounded: A″(u) = exp(u).
Denoting τ* := 2κ_u‖θ*‖₂√(log n), we then have for any α ∈ [−τ*, τ*] that A″(α) ≤ exp(2σκ_u‖θ*‖₂√(log n)) = n^{2σκ_u‖θ*‖₂/√(log n)}, so that (8) is satisfied with κ_{u,A} = n^{2σκ_u‖θ*‖₂/√(log n)}. The response moment polytope for the count-valued response variable y ∈ Y ≡ {0, 1, …} is M = [0, ∞), whose interior is M° = (0, ∞). For the subset of the interior, we define M̄ = [ε, ∞) for some ε such that 0 < ε < 1. The forward mapping in this case is simply A′(η) = exp(η), and M₀ in (7) becomes [a^{−1}, a], where a = n^{2κ_u‖θ*‖₂/√(log n)}. The inverse mapping for the Poisson regression model is then (A′)^{−1}(µ) = log(µ), which can be seen to be Lipschitz on M̄ with constant κ_{ℓ,A} = max{ n^{2κ_u‖θ*‖₂/√(log n)}, 1/ε } in (8). With this setting of M̄, the projection operator is given by Π_M̄(y_i) = I(y_i = 0)ε + I(y_i ≠ 0)y_i.
Now we are ready to recover the error bounds, as a corollary of Theorem 1, for logistic and Poisson regression models when condition (C2) holds:
Corollary 1. Consider any logistic regression model or Poisson regression model for which all conditions in Theorem 1 hold. Suppose that we solve our closed-form estimation problem (4), setting the thresholding parameter ν = C₁√(log p′/n) and the constraint bound λ_n = (2/κ_ℓ)[ c√(log p′)/n^{1/2 − c′/√(log n)} + C₁‖θ*‖₁√(log p′/n) ], where c and c′ are constants depending only on κ_u, ‖θ*‖₂ and ε. Then the
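The projection operators Π_M̄ derived above for the two models are, concretely (a sketch; the function names are ours):

```python
import numpy as np

def project_logistic(y, eps):
    """Projection onto Mbar = [-1 + eps, 1 - eps]; for labels y in {-1, +1}
    this amounts to shrinking each label to y * (1 - eps)."""
    return np.clip(np.asarray(y, dtype=float), -1.0 + eps, 1.0 - eps)

def project_poisson(y, eps):
    """Projection onto Mbar = [eps, inf): zero counts map to eps,
    all other counts are left unchanged."""
    y = np.asarray(y)
    return np.where(y == 0, eps, y).astype(float)
```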
(n, p, k) METHOD TP FP `2 ERROR TIME (n = 2000, `1 MLE1 1 0.1094 4.5450 63.9 p = 5000, `1 MLE2 1 0.0873 4.0721 133.1 k = 10) `1 MLE3 1 0.1000 3.4846 348.3 ELEM 0.9900 0.0184 2.7375 26.5 (n = 4000, `1 MLE1 1 0.1626 4.2132 155.5 p = 5000, `1 MLE2 1 0.1327 3.6569 296.8 k = 10) `1 MLE3 1 0.1112 2.9681 829.3 ELEM 1 0.0069 2.6213 40.2 (n = 5000, `1 MLE1 1 0.1301 18.9079 500.1 p = 104, `1 MLE2 1 0.1695 18.5567 983.8 k = 100) `1 MLE3 1 0.2001 18.2351 2353.3 ELEM 0.9975 0.3622 16.4148 151.8 (n, p, k) METHOD TP FP `2 ERROR TIME (n = 5000, `1 MLE1 0.7990 1 65.1895 520.7 p = 104, `1 MLE2 0.7935 1 65.1165 1005.8 k = 1000) `1 MLE3 0.7965 1 65.1024 2560.1 ELEM 0.8295 1 63.2359 152.1 (n = 8000, `1 MLE1 1 0.1904 18.6186 810.6 p = 104, `1 MLE2 1 0.2181 18.1806 1586.2 k = 100) `1 MLE3 1 0.2364 17.6762 3568.9 ELEM 0.9450 0.0359 11.9881 221.1 (n = 8000, `1 MLE1 0.7965 1 65.0714 809.5 p = 104, `1 MLE2 0.7900 1 64.9650 1652.8 k = 1000) `1 MLE3 0.7865 1 64.8857 4196.6 ELEM 0.7015 0.5103 61.0532 219.4 optimal solution b✓of (4) is guaranteed to be consistent: !!b✓−✓⇤!! 1 4 ` ✓ cplog p0 n(1/2−c0/plog n) + C1k✓⇤k1 r log p0 n ◆ , !!b✓−✓⇤!! 2 8 p k ` ✓ cplog p0 n(1/2−c0/plog n) + C1k✓⇤k1 r log p0 n ◆ , !!b✓−✓⇤!! 1 16k ` ✓ cplog p0 n(1/2−c0/plog n) + C1k✓⇤k1 r log p0 n ◆ , with probability at least 1 −c1p0−c0 1 for some universal constants c1, c0 1 > 0 and p0 := max{n, p}. Moreover, when mins2S |✓⇤ s| ≥ 6 ` " cplog p0 n(1/2−c0/plog n) + C1k✓⇤k1 q log p0 n # , b✓is sparsistent. Remarkably, the rates in Corollary 1 are asymptotically comparable to those for the `1-regularized MLE (see for instance Theorem 4.2 and Corollary 4.4 in [7]). In Appendix A, we place slightly more stringent condition than (C2) and guarantee error bounds with faster convergence rates. 5 Experiments We corroborate the performance of our elementary estimators on simulated data over varied regimes of sample size n, number of covariates p, and sparsity size k. 
We consider two popular instances of GLMs, logistic and Poisson regression models. We compare against standard $\ell_1$-regularized MLE estimators with iteration bounds of 50, 100, and 500, denoted by $\ell_1$ MLE1, $\ell_1$ MLE2 and $\ell_1$ MLE3 respectively. We construct the $n \times p$ design matrices $X$ by sampling the rows independently from $N(0, \Sigma)$ where $\Sigma_{i,j} = 0.5^{|i-j|}$. For each simulation, the entries of the true model coefficient vector $\theta^*$ are set to 0 everywhere, except for a randomly chosen subset of $k$ coefficients, which are chosen independently and uniformly in the interval $(1, 3)$. We report results averaged over 100 independent trials. Noting that our theoretical results were not sensitive to the setting of $\epsilon$ in $\Pi_{\bar{\mathcal{M}}}(y)$, we simply report the results with $\epsilon = 10^{-4}$ across all experiments. While our theorem specifies an optimal setting of the regularization parameters $\lambda_n$ and $\nu$, this optimal setting depends on unknown model parameters. Thus, as is standard with high-dimensional regularized estimators, we set the tuning parameters $\lambda_n = c\sqrt{\log p/n}$ and $\nu = c'\sqrt{\log p/n}$ in a holdout-validated fashion, choosing the parameter that minimizes the $\ell_2$ error on an independent validation set. The detailed experimental setup is described in the appendix. Table 1 summarizes the performance of the $\ell_1$ MLE under 3 different stopping criteria and of Elem-GLM. Besides the $\ell_2$ errors, the target tuning metric, we also provide the true and false positive rates for the support set recovery task on a new test set where the best tuning parameters are used. The computation times in seconds indicate the overall training time summed over the whole parameter-tuning process. As we can see from our experiments, with respect to both statistical and computational performance, our closed-form estimators are quite competitive with the classical $\ell_1$-regularized MLE estimators and in certain cases outperform them.
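The simulation design above is compact enough to sketch directly; the following numpy snippet (function name and seed are ours, purely for illustration) generates the AR(1)-correlated design and the $k$-sparse coefficient vector:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_glm_design(n, p, k, rho=0.5):
    """Rows of X drawn i.i.d. from N(0, Sigma) with Sigma_ij = rho^|i-j|,
    and a k-sparse coefficient vector with nonzero entries drawn
    uniformly from (1, 3), as described in the experiments section."""
    idx = np.arange(p)
    Sigma = rho ** np.abs(idx[:, None] - idx[None, :])   # AR(1) covariance
    X = rng.multivariate_normal(np.zeros(p), Sigma, size=n)
    theta = np.zeros(p)
    support = rng.choice(p, size=k, replace=False)       # random support set
    theta[support] = rng.uniform(1.0, 3.0, size=k)
    return X, theta

X, theta = simulate_glm_design(n=50, p=20, k=5)
```

Responses would then be drawn from the chosen GLM (Bernoulli with logit link, or Poisson with log link) applied to `X @ theta`.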
Note that $\ell_1$ MLE1 stops prematurely after only 50 iterations, so that its training time is sometimes comparable to that of the closed-form estimator. However, its statistical performance as measured by $\ell_2$ error is much worse than that of the other $\ell_1$ MLEs with more iterations, as well as that of the Elem-GLM estimator. Due to the space limit, ROC curves, results for other settings of $p$, and more experiments on real datasets are presented in the appendix.

References

[1] P. McCullagh and J. A. Nelder. Generalized Linear Models. Monographs on Statistics and Applied Probability 37. Chapman and Hall/CRC, 1989.
[2] G. E. Hoffman, B. A. Logsdon, and J. G. Mezey. PUMA: A unified framework for penalized multiple regression analysis of GWAS data. PLoS Computational Biology, 2013.
[3] D. Witten and R. Tibshirani. Survival analysis with high-dimensional covariates. Stat. Methods Med. Res., 19:29–51, 2010.
[4] E. Yang, P. Ravikumar, G. I. Allen, and Z. Liu. Graphical models via generalized linear models. In Neur. Info. Proc. Sys. (NIPS), 25, 2012.
[5] S. Van de Geer. High-dimensional generalized linear models and the lasso. Annals of Statistics, 36(2):614–645, 2008.
[6] F. Bach. Self-concordant analysis for logistic regression. Electron. J. Stat., 4:384–414, 2010.
[7] S. M. Kakade, O. Shamir, K. Sridharan, and A. Tewari. Learning exponential families in high-dimensions: Strong convexity and sparsity. In Inter. Conf. on AI and Statistics (AISTATS), 2010.
[8] S. Negahban, P. Ravikumar, M. J. Wainwright, and B. Yu. A unified framework for high-dimensional analysis of M-estimators with decomposable regularizers. Arxiv preprint arXiv:1010.2731v1, 2010.
[9] F. Bunea. Honest variable selection in linear and logistic regression models via $\ell_1$ and $\ell_1 + \ell_2$ penalization. Electron. J. Stat., 2:1153–1194, 2008.
[10] L. Meier, S. Van de Geer, and P. Bühlmann. The group lasso for logistic regression. Journal of the Royal Statistical Society, Series B, 70:53–71, 2008.
[11] Y. Kim, J. Kim, and Y. Kim. Blockwise sparse regression.
Statistica Sinica, 16:375–390, 2006.
[12] J. Friedman, T. Hastie, and R. Tibshirani. Regularization paths for generalized linear models via coordinate descent. Journal of Statistical Software, 33(1):1–22, 2010.
[13] K. Koh, S. J. Kim, and S. Boyd. An interior-point method for large-scale $\ell_1$-regularized logistic regression. Jour. Mach. Learning Res., 3:1519–1555, 2007.
[14] E. Yang, A. C. Lozano, and P. Ravikumar. Elementary estimators for high-dimensional linear regression. In International Conference on Machine Learning (ICML), 31, 2014.
[15] A. J. Rothman, E. Levina, and J. Zhu. Generalized thresholding of large covariance matrices. Journal of the American Statistical Association (Theory and Methods), 104:177–186, 2009.
[16] M. J. Wainwright and M. I. Jordan. Graphical models, exponential families and variational inference. Foundations and Trends in Machine Learning, 1(1–2):1–305, December 2008.
[17] P. J. Bickel and E. Levina. Covariance regularization by thresholding. Annals of Statistics, 36(6):2577–2604, 2008.
[18] M. J. Wainwright. Sharp thresholds for high-dimensional and noisy sparsity recovery using $\ell_1$-constrained quadratic programming (Lasso). IEEE Trans. Information Theory, 55:2183–2202, May 2009.
[19] Daniel A. Spielman and Shang-Hua Teng. Solving sparse, symmetric, diagonally-dominant linear systems in time $O(m^{1.31})$. In 44th Symposium on Foundations of Computer Science (FOCS 2003), 11-14 October 2003, Cambridge, MA, USA, Proceedings, pages 416–427, 2003.
[20] Michael B. Cohen, Rasmus Kyng, Gary L. Miller, Jakub W. Pachocki, Richard Peng, Anup B. Rao, and Shen Chen Xu. Solving SDD linear systems in nearly $m \log^{1/2} n$ time. In Proceedings of the 46th Annual ACM Symposium on Theory of Computing, STOC '14, pages 343–352. ACM, 2014.
[21] Daniel A. Spielman and Shang-Hua Teng. Nearly linear time algorithms for preconditioning and solving symmetric, diagonally dominant linear systems. SIAM J. Matrix Analysis Applications, 35(3):835–885, 2014.
[22] P.
Ravikumar, M. J. Wainwright, G. Raskutti, and B. Yu. High-dimensional covariance estimation by minimizing $\ell_1$-penalized log-determinant divergence. Electronic Journal of Statistics, 5:935–980, 2011.
[23] E. Yang, A. C. Lozano, and P. Ravikumar. Elementary estimators for sparse covariance matrices and other structured moments. In International Conference on Machine Learning (ICML), 31, 2014.
[24] E. Yang, A. C. Lozano, and P. Ravikumar. Elementary estimators for graphical models. In Neur. Info. Proc. Sys. (NIPS), 27, 2014.
Expressing an Image Stream with a Sequence of Natural Sentences

Cesc Chunseong Park  Gunhee Kim
Seoul National University, Seoul, Korea
{park.chunseong,gunhee}@snu.ac.kr
https://github.com/cesc-park/CRCN

Abstract

We propose an approach for retrieving a sequence of natural sentences for an image stream. Since general users often take a series of pictures on their special moments, it would be better to take the whole image stream into consideration when producing natural language descriptions. While almost all previous studies have dealt with the relation between a single image and a single natural sentence, our work extends both the input and output dimension to a sequence of images and a sequence of sentences. To this end, we design a multimodal architecture called coherent recurrent convolutional network (CRCN), which consists of convolutional neural networks, bidirectional recurrent neural networks, and an entity-based local coherence model. Our approach directly learns from the vast user-generated resource of blog posts as text-image parallel training data. We demonstrate that our approach outperforms other state-of-the-art candidate methods, using both quantitative measures (e.g. BLEU and top-K recall) and user studies via Amazon Mechanical Turk.

1 Introduction

Recently there has been a surge of interest in automatically generating natural language descriptions for images in the research of computer vision, natural language processing, and machine learning (e.g. [5, 8, 9, 12, 14, 15, 26, 21, 30]). While most existing work aims at discovering the relation between a single image and a single natural sentence, we extend both the input and output dimension to a sequence of images and a sequence of sentences, which may be an obvious next step toward joint understanding of the visual content of images and language descriptions, albeit under-addressed in the current literature.
Our problem setup is motivated by the observation that general users often take a series of pictures on their memorable moments. For example, many people who visit New York City (NYC) capture their experiences in large image streams, and thus it would be better to take the whole photo stream into consideration when translating it into a natural language description.

Figure 1: An intuition of our problem statement with a New York City example. We aim at expressing an image stream with a sequence of natural sentences. (a) We leverage natural blog posts to learn the relation between image streams and sentence sequences. (b) We propose coherent recurrent convolutional networks (CRCN) that integrate convolutional networks, bidirectional recurrent networks, and the entity-based coherence model.

Fig.1 illustrates an intuition of our problem statement with an example of visiting NYC. Our objective is, given a photo stream, to automatically produce a sequence of natural language sentences that best describe the essence of the input image set. We propose a novel multimodal architecture named coherent recurrent convolutional networks (CRCN) that integrates convolutional neural networks for image description [13], bidirectional recurrent neural networks for the language model [20], and the local coherence model [1] for a smooth flow of multiple sentences. Since our problem deals with learning the semantic relations between long streams of images and text, it is more challenging to obtain an appropriate text-image parallel corpus than in previous research on single-sentence generation. Our idea for this issue is to directly leverage online natural blog posts as text-image parallel training data, because a blog usually consists of a sequence of informative text and multiple representative images that are carefully selected by authors in a way of storytelling. See an example in Fig.1.(a).
We evaluate our approach with blog datasets of NYC and Disneyland, consisting of more than 20K blog posts with 140K associated images. Although we focus on tourism topics in our experiments, our approach is completely unsupervised and thus applicable to any domain that has a large set of blog posts with images. We demonstrate the superior performance of our approach by comparing with other state-of-the-art alternatives, including [9, 12, 21]. We evaluate with quantitative measures (e.g. BLEU and top-K recall) and user studies via Amazon Mechanical Turk (AMT).

Related work. Due to a recent surge in the volume of literature on the subject of generating natural language descriptions for image data, here we discuss a representative selection of ideas that are closely related to our work. One of the most popular approaches is to pose text generation as a retrieval problem that learns ranking and embedding, in which the caption of a test image is transferred from the sentences of its most similar training images [6, 8, 21, 26]. Our approach partly involves text retrieval, because we search the training database for candidate sentences for each image of a query sequence. However, we then create a final paragraph by considering both the compatibilities between individual images and text, and the coherence that captures text relatedness at the level of sentence-to-sentence transitions. There have also been video-to-sentence works (e.g. [23, 32]); our key novelty is that we explicitly include the coherence model. Unlike videos, consecutive images in the streams may show sharp changes of visual content, which cause abrupt discontinuities between consecutive sentences. Thus the coherence model is all the more necessary to make output passages fluent. Many recent works have exploited multimodal networks that combine deep convolutional neural networks (CNN) [13] and recurrent neural networks (RNN) [20].
Notable architectures in this category integrate the CNN with bidirectional RNNs [9], long-term recurrent convolutional nets [5], long short-term memory nets [30], deep Boltzmann machines [27], dependency-tree RNNs [26], and other variants of multimodal RNNs [3, 19]. Although our method partly takes advantage of such recent progress on multimodal neural networks, our major novelty is that we integrate it with the coherence model as a unified end-to-end architecture to retrieve fluent sequential multiple sentences. In the following, we compare more previous work that bears a particular resemblance to ours. Among multimodal neural network models, the long-term recurrent convolutional net [5] is related to our objective because its framework explicitly models the relations between sequential inputs and outputs. However, the model is applied to a video description task of creating a sentence for a given short video clip, and does not address the generation of multiple sequential sentences. Hence, unlike ours, there is no mechanism for coherence between sentences. The work of [11] addresses the retrieval of image sequences for a query paragraph, which is the opposite direction of our problem. They propose a latent structural SVM framework to learn the semantic relevance relations from text to image sequences. However, their model is specialized only for image sequence retrieval, and thus not applicable to natural sentence generation.

Contributions. We highlight the main contributions of this paper as follows. (1) To the best of our knowledge, this work is the first to address the problem of expressing image streams with sentence sequences. We extend both input and output to more elaborate forms with respect to the whole body of existing methods: image streams instead of individual images, and sentence sequences instead of individual sentences.
(2) We develop a multimodal architecture of coherent recurrent convolutional networks (CRCN), which integrates convolutional networks for image representation, recurrent networks for sentence modeling, and the local coherence model for fluent transitions of sentences. (3) We evaluate our method with large datasets of unstructured blog posts, consisting of 20K blog posts with 140K associated images. With both quantitative evaluation and user studies, we show that our approach is more successful than other state-of-the-art alternatives at verbalizing an image stream.

2 Text-Image Parallel Dataset from Blog Posts

We discuss how to transform blog posts into a training set $\mathcal{B}$ of image-text parallel data streams, each of which is a sequence of image-sentence pairs: $B_l = \{(I^l_1, T^l_1), \ldots, (I^l_{N_l}, T^l_{N_l})\} \in \mathcal{B}$. The training set size is denoted by $L = |\mathcal{B}|$. Fig.2.(a) shows a summary of the pre-processing steps for blog posts.

2.1 Blog Pre-processing

We assume that blog authors augment their text with multiple images in a semantically meaningful manner. In order to decompose each blog into a sequence of images and associated text, we first perform text segmentation and then text summarization. The purpose of text segmentation is to divide the input blog text into a set of text segments, each of which is associated with a single image. Thus, the number of segments is identical to the number of images in the blog. The objective of text summarization is to reduce each text segment to a single key sentence. As a result of these two processes, we can transform each blog into the form $B_l = \{(I^l_1, T^l_1), \ldots, (I^l_{N_l}, T^l_{N_l})\}$.

Text segmentation. We first divide the blog passage into text blocks according to paragraphs. We apply a standard paragraph tokenizer of NLTK [2] that uses rule-based regular expressions to detect paragraph divisions. We then use the heuristics based on the image-to-text-block distances proposed in [10].
Simply, we assign each text block to the image that has the minimum index distance, where each text block and image is counted as a single index distance in the blog.

Text summarization. We summarize each text segment into a single key sentence. We apply the Latent Semantic Analysis (LSA)-based summarization method [4], which uses the singular value decomposition to obtain the concept dimensions of sentences, and then recursively finds the most representative sentences that maximize the inter-sentence similarity for each topic in a text segment.

Data augmentation. Data augmentation is a well-known technique for convolutional neural networks to improve image classification accuracy [13]. Its basic idea is to artificially increase the number of training examples by applying transformations, horizontal reflection, or added noise to training images. We empirically observe that this idea leads to better performance in our problem as well. For each image-sentence sequence $B_l = \{(I^l_1, T^l_1), \ldots, (I^l_{N_l}, T^l_{N_l})\}$, we augment each sentence $T^l_n$ with multiple sentences for training. That is, when we perform the LSA-based text summarization, we select the top-κ highest-ranked summary sentences, among which the top-ranked one becomes the summary sentence for the associated image, and all the top-κ ones are used for training in our model. With a slight abuse of notation, we let $T^l_n$ denote both the single summary sentence and the κ augmented sentences. We choose κ = 3 after thorough empirical tests.

2.2 Text Description

Once we represent each text segment with κ sentences, we extract the paragraph vector [17] to represent the content of the text. The paragraph vector is a neural-network-based unsupervised algorithm that learns fixed-length feature representations from variable-length pieces of passage. We learn 300-dimensional dense vector representations separately for the two classes of the blog dataset using the gensim doc2vec code.
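The index-distance heuristic from the segmentation step above is easy to make concrete. A minimal sketch, where the blog body is modeled as an ordered list of tagged items (a toy layout of our own, not the paper's data format):

```python
def assign_blocks_to_images(items):
    """Assign each text block to the positionally nearest image, counting
    each text block and image as one index step; ties go to the earlier
    image because min() keeps the first minimum."""
    image_pos = [i for i, (kind, _) in enumerate(items) if kind == "img"]
    assignment = {}
    for i, (kind, tid) in enumerate(items):
        if kind != "txt":
            continue
        nearest = min(image_pos, key=lambda j: abs(j - i))
        assignment[tid] = items[nearest][1]
    return assignment

blog = [("img", "I1"), ("txt", "T1"), ("txt", "T2"), ("img", "I2"), ("txt", "T3")]
print(assign_blocks_to_images(blog))  # {'T1': 'I1', 'T2': 'I2', 'T3': 'I2'}
```

The segment for each image is then the concatenation of its assigned text blocks, which is what the summarization step consumes.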
We use $p_n$ to denote the paragraph vector representation for text $T_n$. We then extract a parse tree for each $T_n$ to identify coreferent entities and the grammatical roles of words. We use the Stanford CoreNLP library [18]. The parse trees are used for the local coherence model, which will be discussed in Section 3.2.

3 Our Architecture

Many existing sentence generation models (e.g. [9, 19]) combine words or phrases from training data to generate a sentence for a novel image. Our approach is one level higher; we use sentences from the training database to author a sequence of sentences for a novel image stream. Although our model can easily be extended to use words or phrases as basic building blocks, such granularity makes sequences too long to train the language model, which may cause several difficulties for learning the RNN models. For example, the vanishing gradient effect is a well-known hardship in backpropagating an error signal through a long-range temporal interval. Therefore, we design our approach to retrieve individual candidate sentences for each query image from the training database and craft a best sentence sequence, considering both the fitness of individual image-to-sentence pairs and the coherence between consecutive sentences.

Figure 2: Illustration of (a) pre-processing steps of blog posts, and (b) the proposed CRCN architecture.

Fig.2.(b) illustrates the structure of our CRCN. It consists of three main components: convolutional neural networks (CNN) [13] for image representation, bidirectional recurrent neural networks (BRNN) [24] for sentence sequence modeling, and the local coherence model [1] for a smooth flow of multiple sentences. Each data stream is a variable-length sequence denoted by $\{(I_1, T_1), \ldots, (I_N, T_N)\}$. We use $t \in \{1, \ldots, N\}$ to denote the position of a sentence/image in a sequence. We define the CNN and BRNN models for each position separately, and the coherence model for a whole data stream.
For the CNN component, our choice is the VGGNet [25], which represents images as 4,096-dimensional vectors. We discuss the details of our BRNN and coherence model in Section 3.1 and Section 3.2 respectively, and finally present how to combine the outputs of the three components into a single compatibility score in Section 3.3.

3.1 The BRNN Model

The role of the BRNN model is to represent the content flow of text sequences. In our problem, the BRNN is more suitable than the normal RNN, because the BRNN can simultaneously model forward and backward streams, which allows us to consider both the previous and next sentences for each sentence, so that the content of a whole sequence interacts with one another. As shown in Fig.2.(b), our BRNN has five layers: input layer, forward/backward layer, output layer, and ReLU activation layer, which are finally merged with that of the coherence model into two fully connected layers. Note that each text is represented by a 300-dimensional paragraph vector $p_t$ as discussed in Section 2.2. The exact form of our BRNN is as follows (see Fig.2.(b) together for better understanding):

$$x^f_t = f(W^f_i p_t + b^f_i); \qquad x^b_t = f(W^b_i p_t + b^b_i); \qquad (1)$$
$$h^f_t = f(x^f_t + W_f h^f_{t-1} + b_f); \qquad h^b_t = f(x^b_t + W_b h^b_{t+1} + b_b);$$
$$o_t = W_o(h^f_t + h^b_t) + b_o.$$

The BRNN takes a sequence of text vectors $p_t$ as input. We then compute $x^f_t$ and $x^b_t$, the activations of the input units to the forward and backward units. Unlike other BRNN models, we separate the input activation into forward and backward ones with different sets of parameters $W^f_i$ and $W^b_i$, which empirically leads to better performance. We set the activation function $f$ to the Rectified Linear Unit (ReLU), $f(x) = \max(0, x)$. Then, we create two independent forward and backward hidden units, denoted by $h^f_t$ and $h^b_t$.
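The recurrences in Eq. (1) can be sketched directly in numpy. Shapes and initialization below are toy values of our own choosing (the paper uses 300-dimensional vectors and learned weights), but the update rules follow the equations:

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def brnn_forward(P, params):
    """Forward pass of Eq. (1): P is an (N, d) array of paragraph vectors,
    params holds the weight matrices and biases. Returns the (N, d)
    outputs o_t = Wo (hf_t + hb_t) + bo."""
    Wif, Wib, Wf, Wb, Wo = (params[k] for k in ["Wif", "Wib", "Wf", "Wb", "Wo"])
    bif, bib, bf, bb, bo = (params[k] for k in ["bif", "bib", "bf", "bb", "bo"])
    N, d = P.shape
    xf = relu(P @ Wif.T + bif)          # separate forward/backward input units
    xb = relu(P @ Wib.T + bib)
    hf = np.zeros((N, d)); hb = np.zeros((N, d))
    for t in range(N):                   # forward hidden states
        prev = hf[t - 1] if t > 0 else np.zeros(d)
        hf[t] = relu(xf[t] + Wf @ prev + bf)
    for t in reversed(range(N)):         # backward hidden states
        nxt = hb[t + 1] if t < N - 1 else np.zeros(d)
        hb[t] = relu(xb[t] + Wb @ nxt + bb)
    return (hf + hb) @ Wo.T + bo

rng = np.random.default_rng(0)
d, N = 8, 4
params = {k: 0.1 * rng.standard_normal((d, d)) for k in ["Wif", "Wib", "Wf", "Wb", "Wo"]}
params.update({k: np.zeros(d) for k in ["bif", "bib", "bf", "bb", "bo"]})
O = brnn_forward(rng.standard_normal((N, d)), params)
```

Note that the forward and backward passes never interact until the output layer, which is exactly why both the previous and the next sentence can influence $o_t$.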
The final activation of the BRNN, $o_t$, can be regarded as a description of the content of the sentence at location $t$, which also implicitly encodes the flow of the sentence and its surrounding context in the sequence. The parameter sets to learn include the weights $\{W^f_i, W^b_i, W_f, W_b, W_o\} \in \mathbb{R}^{300 \times 300}$ and biases $\{b^f_i, b^b_i, b_f, b_b, b_o\} \in \mathbb{R}^{300 \times 1}$.

3.2 The Local Coherence Model

The BRNN model can capture the flow of text content, but it lacks the ability to learn the coherence of a passage, which reflects distributional, syntactic, and referential information between discourse entities. Thus, we explicitly include a local coherence model based on the work of [1], which focuses on resolving the patterns of local transitions of discourse entities (i.e. coreferent noun phrases) in the whole text. As shown in Fig.2.(b), we first extract parse trees for every summarized text, denoted by $Z_t$, and then concatenate all sequenced parse trees into one large tree, from which we make an entity grid for the whole sequence. The entity grid is a table where each row corresponds to a discourse entity and each column represents a sentence. Grammatical roles are expressed by three categories plus one for absence (i.e. not referenced in the sentence): S (subject), O (object), X (other than subject or object), and − (absent). After making the entity grid, we enumerate the transitions of the grammatical roles of entities in the whole text. We set the history parameter to three, which means we obtain $4^3 = 64$ transition descriptions (e.g. SO− or OOX). By computing the ratio of the occurrence frequency of each transition, we finally create a 64-dimensional representation that captures the coherence of a sequence. Finally, we zero-pad this descriptor to a 300-dimensional vector and forward it to a ReLU layer as done for the BRNN output.

3.3 Combination of CNN, RNN, and Coherence Model

After the ReLU activation layers of the RNN and the coherence model, their output (i.e.
$\{o_t\}_{t=1}^{N}$ and $q$) goes through two fully connected (FC) layers, whose role is to decide a proper combination of the BRNN language factors and the coherence factors. We drop the bias terms for the fully connected layers, and the dimensions of the variables are $W_{f1} \in \mathbb{R}^{512 \times 300}$, $W_{f2} \in \mathbb{R}^{4096 \times 512}$, $o_t, q \in \mathbb{R}^{300 \times 1}$, $s_t, g \in \mathbb{R}^{4096 \times 1}$, $O \in \mathbb{R}^{300 \times N}$, and $S \in \mathbb{R}^{4096 \times N}$:

$$O = [o_1 | o_2 | \cdots | o_N]; \quad S = [s_1 | s_2 | \cdots | s_N]; \quad W_{f2} W_{f1} [O | q] = [S | g]. \qquad (2)$$

We use shared parameters for $O$ and $q$ so that the output mixes well the interaction between the content flows and coherency. In our tests, joint learning outperforms learning the two terms with separate parameters. Note that the multiplication $W_{f2} W_{f1}$ of the last two FC layers does not reduce to a single linear mapping, thanks to dropout. We assign 0.5 and 0.7 dropout rates to the two layers. Empirically, this improves generalization performance much over a single FC layer with dropout.

3.4 Training the CRCN

To train our CRCN model, we first define the compatibility score between an image stream and a paragraph sequence. While our score function is inspired by Karpathy et al. [9], there are two major differences. First, the score function of [9] deals with sentence fragments and image fragments, and thus the algorithm considers all combinations between them to find the best matching. On the other hand, we define the score as an ordered and paired compatibility between a sentence sequence and an image sequence. Second, we also add a term that measures the coherence relevance between an image sequence and a text sequence. Finally, the score $S_{kl}$ for a sentence sequence $k$ and an image stream $l$ is defined by

$$S_{kl} = \sum_{t=1,\ldots,N} s^k_t \cdot v^l_t + g^k \cdot v^l_t \qquad (3)$$

where $v^l_t$ denotes the CNN feature vector for the $t$-th image of stream $l$. We then define the cost function to train our CRCN model as follows [9]:
$$C(\theta) = \sum_k \Big[ \sum_l \max(0, 1 + S_{kl} - S_{kk}) + \sum_l \max(0, 1 + S_{lk} - S_{kk}) \Big], \qquad (4)$$

where $S_{kk}$ denotes the score between a training pair of corresponding image and sentence sequences. The objective, based on the max-margin structured loss, encourages aligned image-sentence sequence pairs to have a higher score, by a margin, than misaligned pairs. For each positive training example, we randomly sample 100 negative examples from the training set. Since each contrastive example has a random length and is sampled from a dataset with a wide range of content, it is extremely unlikely that the negative examples have the same length and the same content order of sentences as the positive examples.

Optimization. We use the backpropagation through time (BPTT) algorithm [31] to train our model. We apply stochastic gradient descent (SGD) with mini-batches of 100 data streams. Among many SGD techniques, we select the RMSprop optimizer [28], which yields the best performance in our experiments. We initialize the weights of our CRCN model using the method of He et al. [7], which is robust for deep rectified models. We observe that it is better than a simple Gaussian random initialization, although our model is not extremely deep. We use dropout regularization in all layers except the BRNN, with 0.7 dropout for the last FC layer and 0.5 for the other remaining layers.

3.5 Retrieval of Sentence Sequences

At test time, the objective is to retrieve the best sentence sequence for a given query image stream $\{I_{q_1}, \ldots, I_{q_N}\}$. First, we select the K nearest images for each query image from the training database using the $\ell_2$-distance on the CNN VGGNet fc7 features [25]. In our experiments K = 5 is successful. We then generate a set of sentence sequence candidates $\mathcal{C}$ by concatenating the sentences associated with the K nearest images at each location $t$.
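The training objective of Section 3.4 (Eqs. (3)–(4)) can be sketched in a few lines of numpy. The score matrix below is a made-up toy, and we skip the trivial $l = k$ hinge terms, which only add a constant:

```python
import numpy as np

def sequence_score(S_sent, g, V):
    """Eq. (3): S_sent is an (N, D) array of sentence outputs s_t, g the
    (D,) coherence output, V the (N, D) CNN image vectors v_t."""
    return float(np.sum(S_sent * V) + np.sum(V @ g))

def structured_loss(score):
    """Eq. (4): score[k, l] = S_kl, with aligned pairs on the diagonal.
    Penalize any misaligned pair that comes within a margin of 1 of the
    aligned score, in both the stream and the sequence direction."""
    K = score.shape[0]
    loss = 0.0
    for k in range(K):
        for l in range(K):
            if l == k:
                continue
            loss += max(0.0, 1.0 + score[k, l] - score[k, k])  # wrong stream
            loss += max(0.0, 1.0 + score[l, k] - score[k, k])  # wrong sequence
    return loss

score = np.array([[5.0, 1.0],
                  [0.0, 4.0]])
print(structured_loss(score))  # aligned pairs already beat the margin: 0.0
```

When the aligned scores dominate all misaligned ones by more than the unit margin, the loss vanishes, which is exactly the behavior the ranking objective rewards.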
Finally, we use our learned CRCN model to compute the compatibility score between the query image stream and each sequence candidate, according to which we rank the candidates. However, one major difficulty of this scenario is that there are exponentially many candidates (i.e. $|\mathcal{C}| = K^N$). To resolve this issue, we use an approximate divide-and-conquer strategy; we recursively halve the problem into subproblems until the size of a subproblem is manageable. For example, if we halve the search candidate length $Q$ times, then the search space of each subproblem becomes $K^{N/2^Q}$. Using the beam search idea, we first find the top-M best sequence candidates in the subproblems of the lowest level, and recursively increase the candidate lengths while the maximum candidate set size is limited to M. We set M = 50. Though it is an approximate search, our experiments assure that it achieves almost optimal solutions with a plausible amount of combinatorial search, mainly because local fluency and coherence are undoubtedly necessary for global fluency and coherence. That is, in order for a whole sentence sequence to be fluent and coherent, all of its subparts must be as well.

4 Experiments

We compare the performance of our approach with other state-of-the-art candidate methods via quantitative measures and user studies using Amazon Mechanical Turk (AMT). Please refer to the supplementary material for more results and the details of the implementation and experimental setting.

4.1 Experimental Setting

Dataset. We collect blog datasets on two topics: NYC and Disneyland. We reuse the Disneyland blog data from the dataset of [11], and newly collect the NYC data using the same crawling method as [11], in which we first crawl blog posts and their associated pictures from two popular blog publishing sites, BLOGSPOT and WORDPRESS, by varying the query terms in Google search. Then, we manually select the travelogue posts that describe stories and events with multiple images.
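The divide-and-conquer beam search described above can be sketched as follows. The scorer here is a stand-in lambda and the candidates are toy strings (the real system scores candidates with the trained CRCN), but the halve/prune/merge structure is the same:

```python
def search(per_image_cands, score, M=50):
    """Recursively halve the image stream, keep only the top-M partial
    sentence sequences for each half, and combine the survivors, so the
    candidate set stays bounded instead of growing as K^N."""
    N = len(per_image_cands)
    if N == 1:
        seqs = [(c,) for c in per_image_cands[0]]
    else:
        left = search(per_image_cands[:N // 2], score, M)
        right = search(per_image_cands[N // 2:], score, M)
        seqs = [a + b for a in left for b in right]   # merge surviving halves
    return sorted(seqs, key=score, reverse=True)[:M]  # prune to the beam

# Toy example: 3 images with K = 2 candidate sentences each; the stand-in
# scorer simply rewards diverse sequences.
cands = [["a", "b"], ["b", "c"], ["c", "d"]]
best = search(cands, score=lambda seq: len(set(seq)), M=5)
```

The pruning is what makes the search approximate: a sequence can only survive if its halves rank well on their own, which matches the paper's argument that local fluency is necessary for global fluency.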
Finally, the dataset includes 11,863 unique blog posts and 78,467 images for NYC, and 7,717 blog posts and 60,545 images for Disneyland.

Task. For quantitative evaluation, we randomly split our dataset into 80% for training, 10% for validation, and the rest as a test set. For each test post, we use the image sequence as a query $I_q$ and the sequence of summarized sentences as the groundtruth $T_G$. Each algorithm retrieves the best sequences from the training database for a query image sequence, and ideally the retrieved sequences match well with $T_G$. Since the training and test data are disjoint, each algorithm can only retrieve similar (but not identical) sentences at best. For quantitative measures, we exploit two types of metrics: language similarity (i.e. BLEU [22], CIDEr [29], and METEOR [16] scores) and retrieval accuracy (i.e. top-K recall and median rank), which are popularly used in the text generation literature [8, 9, 19, 26]. The top-K recall R@K is the recall rate of a groundtruth retrieval given the top K candidates, and the median rank indicates the median ranking of the first retrieved groundtruth. Better performance is indicated by higher BLEU, CIDEr, METEOR, and R@K scores, and lower median rank values.

Baselines. Since sentence sequence generation from image streams has not been addressed in previous research, we instead extend several state-of-the-art single-sentence models that have publicly available code as baselines, including the log-bilinear multimodal models by Kiros et al. [12], and the recurrent convolutional models by Karpathy et al. [9] and Vinyals et al. [30]. For [12], we use the three variants introduced in the paper, which are the standard log-bilinear model (LBL) and two multimodal extensions: modality-based LBL (MLBL-B) and factored three-way LBL (MLBL-F). We use the NeuralTalk package authored by Karpathy et al. for the baseline of [9], denoted by (CNN+RNN), and [30], denoted by (CNN+LSTM).
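The two retrieval metrics are straightforward to compute once the rank of the groundtruth is known for each query; a minimal sketch (the rank values below are made up for illustration):

```python
import statistics

def recall_at_k(gt_ranks, k):
    """Top-K recall R@K: fraction of queries whose groundtruth sequence
    is ranked within the top k retrieved candidates (1-based ranks)."""
    return sum(r <= k for r in gt_ranks) / len(gt_ranks)

def median_rank(gt_ranks):
    """Median rank of the first retrieved groundtruth over all queries."""
    return statistics.median(gt_ranks)

ranks = [1, 3, 8, 2, 40]       # hypothetical groundtruth ranks
print(recall_at_k(ranks, 5))   # 0.6: three of five queries rank in the top 5
print(median_rank(ranks))      # 3
```

Note that R@K rises as K grows while the median rank is K-free, which is why the two are reported together.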
As the simplest baseline, we also compare with the global matching (GloMatch) in [21]. For all the baselines, we create final sentence sequences by concatenating the sentences generated for each image in the query stream.

New York City
                  B-1    B-2    B-3    B-4    CIDEr  METEOR  R@1    R@5    R@10   MedRank
(CNN+LSTM) [30]   16.24  5.79   1.38   0.10   9.1    5.73    0.95   7.38   13.33  88.5
(CNN+RNN) [9]     6.21   0.01   0.00   0.00   0.5    1.34    0.48   2.86   4.29   120.5
(MLBL-F) [12]     21.03  1.92   0.12   0.01   4.3    6.03    0.71   4.52   7.86   87.0
(MLBL-B) [12]     20.43  1.54   0.09   0.01   2.6    5.30    0.48   3.57   5.48   101.5
(LBL) [12]        20.96  1.68   0.08   0.01   2.6    5.29    1.19   4.52   7.38   100.5
(GloMatch) [21]   19.00  1.59   0.04   0.00   2.80   5.17    0.24   2.62   4.05   95.00
(1NN)             25.97  3.42   0.60   0.22   15.9   7.06    5.95   13.57  20.71  63.50
(RCN)             27.09  5.45   2.56   2.10   33.5   7.87    3.80   18.33  30.24  29.00
(CRCN)            26.83  5.37   2.57   2.08   30.9   7.69    11.67  31.19  43.57  14.00

Disneyland
                  B-1    B-2    B-3    B-4    CIDEr  METEOR  R@1    R@5    R@10   MedRank
(CNN+LSTM) [30]   13.22  1.56   0.40   0.07   10.0   4.51    2.83   10.38  16.98  61.5
(CNN+RNN) [9]     6.04   0.00   0.00   0.00   0.4    1.34    1.02   3.40   5.78   88.0
(MLBL-F) [12]     15.75  1.61   0.07   0.01   4.9    7.12    0.68   4.08   10.54  63.0
(MLBL-B) [12]     15.65  1.32   0.05   0.00   3.8    5.83    0.34   2.72   6.80   69.0
(LBL) [12]        18.94  1.70   0.06   0.01   3.4    4.99    1.02   4.08   7.82   62.0
(GloMatch) [21]   11.94  0.37   0.01   0.00   2.2    4.31    2.04   5.78   7.48   73.0
(1NN)             25.92  3.34   0.71   0.38   19.5   7.46    9.18   19.05  27.21  45.0
(RCN)             28.15  6.84   4.11   3.52   51.3   8.87    5.10   20.07  28.57  29.5
(CRCN)            28.40  6.88   4.11   3.49   52.7   8.78    14.29  31.29  43.20  16.0

Table 1: Evaluation of sentence generation for the two datasets, New York City and Disneyland, with language similarity metrics (BLEU, CIDEr, METEOR) and retrieval metrics (R@K, median rank). Better performance is indicated by higher BLEU, CIDEr, METEOR, and R@K scores, and lower median rank values.

We also compare different variants of our method to validate the contributions of its key components.
We test the K-nearest search (1NN), without the RNN part, as the simplest variant; for each image in a test query, we find its K(= 1) most similar training images and simply concatenate their associated sentences. The second variant, denoted by (RCN), is the BRNN-only method that excludes the entity-based coherence model from our approach. Our complete method is denoted by (CRCN), and this comparison quantifies the improvement due to the coherence model. To be fair, we use the same VGGNet fc7 feature [25] for all the algorithms.

4.2 Quantitative Results

Table 1 shows the quantitative results of experiments using both language and retrieval metrics. Our approaches (CRCN) and (RCN) outperform the other state-of-the-art baselines by large margins; unlike ours, those baselines generate passages without considering sentence-to-sentence transitions. The (MLBL-F) shows the best performance among the three models of [12], albeit by a small margin, partly because the three variants share the same word dictionary in training. Among mRNN-based models, the (CNN+LSTM) significantly outperforms the (CNN+RNN), because the LSTM units help learn models more robustly from the irregular and lengthy data of natural blogs. We also observe that (CRCN) outperforms (1NN) and (RCN), especially on the retrieval metrics. This shows that the integration of the two key components, the BRNN and the coherence model, indeed contributes to the performance improvement. The (CRCN) is only slightly better than the (RCN) on language metrics but significantly better on retrieval metrics: (RCN) retrieves fairly good solutions, but, compared to (CRCN), it is not good at ranking the single correct solution high. The small margins on language metrics are also attributable to the metrics' inherent limitations; for example, BLEU counts matches of n-gram words, and is thus ill-suited to comparing sentences, let alone paragraphs, in terms of fluency and coherence.
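The (1NN) variant above admits a very compact sketch. The cosine similarity on fc7-like feature vectors, and the toy features and sentences below, are assumptions for illustration, not the paper's implementation.

```python
import math

# Sketch of the (1NN) baseline: for each query image feature, find the most
# similar training image (cosine similarity on hypothetical feature vectors)
# and concatenate the associated sentences into one passage.

def cosine(u, v):
    num = sum(a * b for a, b in zip(u, v))
    den = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return num / den

def one_nn_passage(query_feats, train_feats, train_sentences):
    picked = []
    for q in query_feats:
        best = max(range(len(train_feats)), key=lambda i: cosine(q, train_feats[i]))
        picked.append(train_sentences[best])
    return " ".join(picked)

# Toy training database of two images with their sentences.
train_feats = [[1.0, 0.0], [0.0, 1.0]]
train_sents = ["A walk in Central Park.", "Dinner in Manhattan."]
passage = one_nn_passage([[0.9, 0.1], [0.2, 0.8]], train_feats, train_sents)
```

Because each query image is matched independently, this baseline ignores sentence-to-sentence transitions entirely, which is exactly what (RCN) and (CRCN) add back.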
Fig. 3 illustrates several examples of sentence sequence retrieval. In each set, we show a query image stream and the text results created by our method and the baselines. Except for Fig. 3(d), we show only parts of the sequences, which are too long to illustrate in full. These qualitative examples demonstrate that our approach is more successful at verbalizing image sequences with a variety of content.

4.3 User Studies via Amazon Mechanical Turk

We perform user studies using AMT to observe general users' preferences between the text sequences produced by different algorithms. Since our evaluation involves multiple images and long passages of text, we design our AMT task to be sufficiently simple for general turkers with no background knowledge.

Figure 3: Examples of sentence sequence retrieval for NYC (top) and Disneyland (bottom). In each set, we present a part of a query image stream and its corresponding text output by our method and a baseline.

Table 2: Results of AMT pairwise preference tests. We report the percentage of responses in which turkers vote for our (CRCN) over each baseline. The length of the query streams is 5, except in the last column, where it is 8-10.

             (GloMatch)        (CNN+LSTM)        (MLBL-B)          (RCN)            (RCN, N>=8)
NYC          92.7% (139/150)   80.0% (120/150)   69.3% (104/150)   54.0% (81/150)   57.0% (131/230)
Disneyland   95.3% (143/150)   82.0% (123/150)   70.7% (106/150)   56.0% (84/150)   60.1% (143/238)

We first randomly sample 100 test streams from the two datasets and set the maximum number of images per query to 5; if a query is longer, we uniformly subsample it down to 5 images. In an AMT test, we show a query image stream Iq and a pair of passages, generated by our method (CRCN) and by one baseline, in random order. We ask turkers to choose the text sequence that agrees better with Iq. We design the test as a pairwise comparison instead of a multiple-choice question to make answering and analysis easier. The questions look very similar to the examples of Fig. 3.
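The tallying behind the preference percentages in Table 2 is straightforward; the sketch below is illustrative only, with the vote counts taken from the table's first cell.

```python
# Sketch of how a pairwise-preference percentage is tallied: each response is
# 1 if the turker preferred (CRCN) over the baseline, else 0, and the cell is
# reported as "x% (wins/total)".

def preference_rate(votes):
    return 100.0 * sum(votes) / len(votes)

# Tally reproducing the first Table 2 cell: 139 of 150 responses favor (CRCN).
votes = [1] * 139 + [0] * 11
rate = preference_rate(votes)
summary = f"{rate:.1f}% ({sum(votes)}/{len(votes)})"
```

A rate above 50% means turkers preferred (CRCN) over that baseline more often than not.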
We obtain answers from three different turkers for each query. We compare with four baselines: (MLBL-B) among the three variants of [12], (CNN+LSTM) among the mRNN-based methods, and (GloMatch) and (RCN) as variants of our method. Table 2 shows the results of the AMT tests, which validate that AMT annotators prefer our results to those of the baselines. The (GloMatch) is the worst because it uses too weak an image representation (i.e. GIST and Tiny images). The differences between (CRCN) and (RCN) (i.e. the 4th column of Table 2) are not as significant as in the previous quantitative measures, mainly because each query image stream is subsampled to a relatively short length of 5, and coherence becomes more critical as passages grow longer. To justify this argument, we run another set of AMT tests in which we use 8-10 images per query. As shown in the last column of Table 2, the performance margins between (CRCN) and (RCN) become larger as the lengths of the query image streams increase. This result confirms that the longer the passage, the more important the coherence, and thus the more (CRCN)'s output is preferred by turkers.

5 Conclusion

We proposed an approach for retrieving sentence sequences for an image stream. We developed the coherent recurrent convolutional network (CRCN), which consists of convolutional networks, bidirectional recurrent networks, and an entity-based local coherence model. With quantitative evaluation and user studies on AMT over large collections of blog posts, we demonstrated that our CRCN approach outperforms other state-of-the-art candidate methods.

Acknowledgements. This research is partially supported by Hancom and the Basic Science Research Program through the National Research Foundation of Korea (2015R1C1A1A02036562).

References
[1] R. Barzilay and M. Lapata. Modeling Local Coherence: An Entity-Based Approach. In ACL, 2008.
[2] S. Bird, E. Loper, and E. Klein. Natural Language Processing with Python. O'Reilly Media Inc., 2009.
[3] X. Chen and C. L. Zitnick. Mind's Eye: A Recurrent Visual Representation for Image Caption Generation. In CVPR, 2015.
[4] F. Y. Y. Choi, P. Wiemer-Hastings, and J. Moore. Latent Semantic Analysis for Text Segmentation. In EMNLP, 2001.
[5] J. Donahue, L. A. Hendricks, S. Guadarrama, M. Rohrbach, S. Venugopalan, K. Saenko, and T. Darrell. Long-term Recurrent Convolutional Networks for Visual Recognition and Description. In CVPR, 2015.
[6] Y. Gong, L. Wang, M. Hodosh, J. Hockenmaier, and S. Lazebnik. Improving Image-Sentence Embeddings Using Large Weakly Annotated Photo Collections. In ECCV, 2014.
[7] K. He, X. Zhang, S. Ren, and J. Sun. Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification. In arXiv, 2015.
[8] M. Hodosh, P. Young, and J. Hockenmaier. Framing Image Description as a Ranking Task: Data, Models and Evaluation Metrics. JAIR, 47:853-899, 2013.
[9] A. Karpathy and L. Fei-Fei. Deep Visual-Semantic Alignments for Generating Image Descriptions. In CVPR, 2015.
[10] G. Kim, S. Moon, and L. Sigal. Joint Photo Stream and Blog Post Summarization and Exploration. In CVPR, 2015.
[11] G. Kim, S. Moon, and L. Sigal. Ranking and Retrieval of Image Sequences from Multiple Paragraph Queries. In CVPR, 2015.
[12] R. Kiros, R. Salakhutdinov, and R. Zemel. Multimodal Neural Language Models. In ICML, 2014.
[13] A. Krizhevsky, I. Sutskever, and G. E. Hinton. ImageNet Classification with Deep Convolutional Neural Networks. In NIPS, 2012.
[14] G. Kulkarni, V. Premraj, S. Dhar, S. Li, Y. Choi, A. C. Berg, and T. L. Berg. Baby Talk: Understanding and Generating Image Descriptions. In CVPR, 2011.
[15] P. Kuznetsova, V. Ordonez, T. L. Berg, and Y. Choi. TreeTalk: Composition and Compression of Trees for Image Descriptions. In TACL, 2014.
[16] S. Banerjee and A. Lavie. METEOR: An Automatic Metric for MT Evaluation with Improved Correlation with Human Judgments. In ACL, 2005.
[17] Q. Le and T. Mikolov. Distributed Representations of Sentences and Documents. In ICML, 2014.
[18] C. D. Manning, M. Surdeanu, J. Bauer, J. Finkel, S. J. Bethard, and D. McClosky. The Stanford CoreNLP Natural Language Processing Toolkit. In ACL, 2014.
[19] J. Mao, W. Xu, Y. Yang, J. Wang, Z. Huang, and A. L. Yuille. Deep Captioning with Multimodal Recurrent Neural Networks (m-RNN). In ICLR, 2015.
[20] T. Mikolov. Statistical Language Models based on Neural Networks. Ph.D. thesis, Brno University of Technology, 2012.
[21] V. Ordonez, G. Kulkarni, and T. L. Berg. Im2Text: Describing Images Using 1 Million Captioned Photographs. In NIPS, 2011.
[22] K. Papineni, S. Roukos, T. Ward, and W.-J. Zhu. BLEU: A Method for Automatic Evaluation of Machine Translation. In ACL, 2002.
[23] M. Rohrbach, W. Qiu, I. Titov, S. Thater, M. Pinkal, and B. Schiele. Translating Video Content to Natural Language Descriptions. In ICCV, 2013.
[24] M. Schuster and K. K. Paliwal. Bidirectional Recurrent Neural Networks. IEEE TSP, 1997.
[25] K. Simonyan and A. Zisserman. Very Deep Convolutional Networks for Large-Scale Image Recognition. In ICLR, 2015.
[26] R. Socher, A. Karpathy, Q. V. Le, C. D. Manning, and A. Y. Ng. Grounded Compositional Semantics for Finding and Describing Images with Sentences. In TACL, 2013.
[27] N. Srivastava and R. Salakhutdinov. Multimodal Learning with Deep Boltzmann Machines. In NIPS, 2012.
[28] T. Tieleman and G. E. Hinton. Lecture 6.5 - RMSProp. In Coursera, 2012.
[29] R. Vedantam, C. L. Zitnick, and D. Parikh. CIDEr: Consensus-based Image Description Evaluation. In arXiv:1411.5726, 2014.
[30] O. Vinyals, A. Toshev, S. Bengio, and D. Erhan. Show and Tell: A Neural Image Caption Generator. In CVPR, 2015.
[31] P. J. Werbos. Generalization of Backpropagation with Application to a Recurrent Gas Market Model. Neural Networks, 1:339-356, 1988.
[32] R. Xu, C. Xiong, W. Chen, and J. J. Corso. Jointly Modeling Deep Video and Compositional Text to Bridge Vision and Language in a Unified Framework. In AAAI, 2015.
Learning spatiotemporal trajectories from manifold-valued longitudinal data

Jean-Baptiste Schiratti2,1, Stéphanie Allassonnière2, Olivier Colliot1, Stanley Durrleman1
1 ARAMIS Lab, INRIA Paris, Inserm U1127, CNRS UMR 7225, Sorbonne Universités, UPMC Univ Paris 06 UMR S 1127, Institut du Cerveau et de la Moelle épinière, ICM, F-75013, Paris, France
2 CMAP, Ecole Polytechnique, Palaiseau, France
jean-baptiste.schiratti@cmap.polytechnique.fr, stephanie.allassonniere@polytechnique.edu, olivier.colliot@upmc.fr, stanley.durrleman@inria.fr

Abstract

We propose a Bayesian mixed-effects model to learn typical scenarios of changes from longitudinal manifold-valued data, namely repeated measurements of the same objects or individuals at several points in time. The model allows us to estimate a group-average trajectory in the space of measurements. Random variations of this trajectory result from spatiotemporal transformations, which allow changes in the direction of the trajectory and in the pace at which trajectories are followed. The tools of Riemannian geometry allow us to derive a generic algorithm for any kind of data with smooth constraints, which therefore lie on a Riemannian manifold. A stochastic approximation of the Expectation-Maximization algorithm is used to estimate the model parameters in this highly non-linear setting. The method is used to estimate a data-driven model of the progressive impairment of cognitive functions during the onset of Alzheimer's disease. Experimental results show that the model correctly puts into correspondence the ages at which individuals were diagnosed with the disease, thus validating that it effectively estimates a normative scenario of disease progression. Random effects provide unique insights into the variations in the ordering and timing of the succession of cognitive impairments across individuals.
1 Introduction

Age-related brain diseases, such as Parkinson's or Alzheimer's disease (AD), are complex diseases with multiple effects on the metabolism, structure and function of the brain. Models of disease progression showing the sequence and timing of these effects during the course of the disease remain largely hypothetical [3, 13]. Large databases have been collected recently in the hope of giving experimental evidence of the patterns of disease progression based on the estimation of data-driven models. These databases are longitudinal, in the sense that they contain repeated measurements of several subjects at multiple time-points, which do not necessarily correspond across subjects. Learning models of disease progression from such databases raises great methodological challenges. The main difficulty lies in the fact that the age of a given individual gives no information about the stage of disease progression of this individual. The onset of the clinical symptoms of AD may vary between forty and eighty years of age, and the duration of the disease from a few years to decades. Moreover, the onset of the disease does not coincide with the onset of the symptoms: according to recent studies, symptoms are likely to be preceded by a silent phase of the disease, about which little is known. As a consequence, statistical models based on the regression of measurements with age are inadequate to model disease progression. The set of measurements of a given individual at a specific time-point belongs to a high-dimensional space. Building a model of disease progression amounts to estimating continuous subject-specific trajectories in this space and averaging those trajectories across a group of individuals.
Trajectories need to be registered in space, to account for the fact that individuals follow different trajectories, and in time, to account for the fact that individuals, even if they follow the same trajectory, may be at different positions on this trajectory at the same age. The framework of mixed-effects models seems well suited to this hierarchical problem. Mixed-effects models for longitudinal measurements were introduced in the seminal paper of Laird and Ware [15] and have been widely developed since then (see [6], [16] for instance). However, this kind of model suffers from two main drawbacks regarding our problem. First, these models are built on the estimation of the distribution of the measurements at a given reference time. In many situations, this reference time is given by the experimental setup: the date at which treatment begins, the date of seeding in studies of plant growth, etc. In studies of ageing, using these models would require registering the data of each individual to a common stage of disease progression before comparison. Unfortunately, this stage is unknown, and such a temporal registration is actually what we wish to estimate. Another limitation of usual mixed-effects models is that they are defined for data lying in Euclidean spaces. However, measurements with smooth constraints usually cannot be summed or scaled, such as normalized scores of neuropsychological tests, positive definite symmetric matrices, or shapes encoded as images or meshes. These data are naturally modeled as points on Riemannian manifolds. Although the development of statistical models for manifold-valued data is a blooming topic, the construction of statistical models for longitudinal data on a manifold remains an open problem. The concept of "time-warp" was introduced in [8] to allow for temporal registration of trajectories of shape changes.
Nevertheless, the combination of the time-warps with the intrinsic variability of shapes across individuals comes at the expense of a simplifying approximation: the variance of shapes does not depend on time, whereas it should adapt with the average scenario of shape changes. Moreover, the parameters of the statistical model are estimated by minimizing a sum of squares which results from an uncontrolled likelihood approximation. In [18], time-warps are used to define a metric between curves that is invariant under time reparameterization. This invariance, by definition, prevents the estimation of correspondences across trajectories, and therefore of the distribution of trajectories in the spatiotemporal domain. In [17], the authors proposed a model for longitudinal image data, but it is not built on the inference of a statistical model and does not include a time reparameterization of the estimated trajectories. In this paper, we propose a generic statistical framework for the definition and estimation of mixed-effects models for longitudinal manifold-valued data. Using the tools of geometry allows us to derive a method that makes few assumptions about the data and the problem at hand. Modeling choices boil down to the definition of the metric on the manifold. This geometrical modeling also allows us to introduce the concept of parallel curves on a manifold, which is key to uniquely decomposing differences seen in the data into a spatial and a temporal component. Because of the non-linearity of the model, the estimation of the parameters should be based on an adequate maximization of the observed likelihood. To address this issue, we propose to use a stochastic version of the Expectation-Maximization algorithm [5], namely the MCMC SAEM [2], for which theoretical convergence results have been proved in [4], [2].
Experimental results on neuropsychological test scores and estimates of scenarios of AD progression are given in Section 4.

2 Spatiotemporal mixed-effects model for manifold-valued data

2.1 Riemannian geometry setting

The observed data consist in repeated multivariate measurements of p individuals. For a given individual, the measurements are obtained at time points t_{i,1} < ... < t_{i,n_i}. The j-th measurement of the i-th individual is denoted by y_{i,j}. We assume that each observation y_{i,j} is a point on an N-dimensional Riemannian manifold M embedded in R^P (with P ≥ N) and equipped with a Riemannian metric g^M. We denote ∇^M the covariant derivative. We assume that the manifold is geodesically complete, meaning that geodesics are defined for all time. We recall that a geodesic is a curve drawn on the manifold, γ : R → M, which has no acceleration: ∇^M_{γ̇} γ̇ = 0. For a point p ∈ M and a vector v ∈ T_pM, the mapping Exp^M_p(v) denotes the Riemannian exponential, namely the point reached at time 1 by the geodesic starting at p with velocity v. The parallel transport of a vector X_0 ∈ T_{γ(t_0)}M along a curve γ is a time-indexed family of vectors X(t) ∈ T_{γ(t)}M which satisfies ∇^M_{γ̇(t)} X(t) = 0 and X(t_0) = X_0. We denote P_{γ,t_0,t}(X_0) the isometry that maps X_0 to X(t). In order to describe our model, we need to introduce the notion of "parallel curves" on the manifold:

Definition 1. Let γ be a curve on M defined for all time, t_0 ∈ R a time-point, and w ∈ T_{γ(t_0)}M, w ≠ 0. One defines the curve s ↦ η_w(γ, s), called parallel to the curve γ, as:

η_w(γ, s) = Exp^M_{γ(s)}( P_{γ,t_0,s}(w) ),  s ∈ R.

The idea is illustrated in Fig. 1. One uses parallel transport to move the vector w from γ(t_0) to γ(s) along γ. At the point γ(s), a new point on M is obtained by taking the Riemannian exponential of P_{γ,t_0,s}(w). This new point is denoted by η_w(γ, s). As s varies, one describes a curve η_w(γ, ·) on M, which can be understood as a "parallel" to the curve γ.
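In the Euclidean (flat) case, parallel transport leaves w unchanged and the exponential map is vector addition, so the construction of Definition 1 reduces to a translation, η_w(γ, s) = γ(s) + w. A minimal numeric sketch of this flat-space special case (the straight-line geodesic below is an illustrative assumption):

```python
# Flat-space instance of Definition 1 in R^2: parallel transport is the
# identity and Exp is addition, so eta_w(gamma, s) = gamma(s) + w.
# The geodesic gamma(s) = p + s*v is a hypothetical example.

def gamma(s, p=(0.0, 0.0), v=(1.0, 2.0)):
    return (p[0] + s * v[0], p[1] + s * v[1])

def eta(w, s):
    g = gamma(s)
    return (g[0] + w[0], g[1] + w[1])

point = eta((0.5, -0.5), 2.0)   # gamma(2) = (2, 4), shifted by w
```

On a curved manifold the transported vector changes direction along γ, which is why η_w(γ, ·) is in general not itself a geodesic.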
It should be pointed out that, even if γ is a geodesic, η_w(γ, ·) is in general not a geodesic of M. In the Euclidean case (i.e. a flat manifold), the curve η_w(γ, ·) is the translation of the curve γ: η_w(γ, s) = γ(s) + w.

Figure 1: Model description on a schematic manifold. a) (left): a non-zero vector w_i is chosen in T_{γ(t_0)}M. b) (middle): the tangent vector w_i is transported along the geodesic γ, and a point η_{w_i}(γ, s) is constructed at time s by use of the Riemannian exponential. c) (right): the curve η_{w_i}(γ, ·) is the parallel resulting from this construction.

2.2 Generic spatiotemporal model for longitudinal data

Our model is built in a hierarchical manner: data points are seen as samples along individual trajectories, and these trajectories derive from a group-average trajectory. The model writes y_{i,j} = η_{w_i}(γ, ψ_i(t_{i,j})) + ε_{i,j}, where we assume the group-average trajectory to be a geodesic, denoted γ from now on. Individual trajectories derive from the group average by spatiotemporal transformations. They are defined as a time re-parameterization of a trajectory that is parallel to the group average: t ↦ η_{w_i}(γ, ψ_i(t)). For the i-th individual, w_i denotes a non-zero tangent vector in T_{γ(t_0)}M, for some specific time-point t_0 that needs to be estimated, and which is orthogonal to the tangent vector γ̇(t_0) for the inner product given by the metric (⟨·, ·⟩_{γ(t_0)} = g^M_{γ(t_0)}). The time-warp function ψ_i is defined as: ψ_i(t) = α_i(t − t_0 − τ_i) + t_0. The parameter α_i is an acceleration factor which encodes whether the i-th individual progresses faster or slower than average, τ_i is a time-shift which characterizes the advance or delay of the i-th individual with respect to the average, and w_i is a space-shift which encodes the variability in the measurements across individuals at the same stage of progression.
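The affine time-warp ψ_i(t) = α_i(t − t_0 − τ_i) + t_0 can be sketched numerically; the values used below (t_0 = 72, α_i = 2, τ_i = 5) are hypothetical and chosen only for illustration.

```python
# Sketch of the affine time-warp psi_i(t) = alpha_i*(t - t0 - tau_i) + t0.
# alpha_i > 1 means faster-than-average progression; tau_i > 0 a delayed onset.

def time_warp(t, alpha, tau, t0):
    return alpha * (t - t0 - tau) + t0

t0 = 72.0
# An average individual (alpha = 1, tau = 0) keeps the original clock:
same = time_warp(80.0, 1.0, 0.0, t0)
# An individual progressing twice as fast, with onset delayed by 5 years:
warped = time_warp(80.0, 2.0, 5.0, t0)
```

Note that ψ_i(t_0 + τ_i) = t_0 for any α_i: a shifted individual passes through the reference stage t_0 at age t_0 + τ_i, regardless of pace.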
The normal tubular neighborhood theorem ([11]) ensures that parallel shifting defines a spatiotemporal coordinate system as long as the vectors w_i are chosen orthogonal and sufficiently small. The orthogonality condition on the tangent vectors w_i is necessary to ensure the identifiability of the model: if a vector w_i were not chosen orthogonal, its projection onto γ̇(t_0) would play the same role as the acceleration factor. The spatial and temporal transformations commute, in the sense that one may re-parameterize the average trajectory before building the parallel curve, or vice versa. Mathematically, this writes η_{w_i}(γ ∘ ψ_i, s) = η_{w_i}(γ, ψ_i(s)). This relation also explains the particular form of the affine time-warp ψ_i. The geodesic γ is characterized by the fact that it passes at time-point t_0 through the point p_0 = γ(t_0) with velocity v_0 = γ̇(t_0). Then γ ∘ ψ_i is the same trajectory, except that it passes through p_0 at time t_0 + τ_i with velocity α_i v_0. The fixed effects of the model are the parameters of the average geodesic: the point p_0 on the manifold, the time-point t_0 and the velocity v_0. The random effects are the acceleration factors α_i, the time-shifts τ_i and the space-shifts w_i. The first two random effects are scalars. One assumes the acceleration factors to follow a log-normal distribution (they need to be positive in order not to reverse time), and the time-shifts to follow a zero-mean Gaussian distribution. Space-shifts are vectors of dimension N − 1 in the hyperplane γ̇(t_0)^⊥ in T_{γ(t_0)}M. In the spirit of independent component analysis [12], we assume that the w_i result from the superposition of N_s < N statistically independent components. This writes w_i = A s_i, where A is an N × N_s matrix of rank N_s, s_i is a vector of N_s independent sources following a heavy-tailed Laplace distribution with fixed parameter, and each column c_j(A) (1 ≤ j ≤ N_s) of A satisfies the orthogonality condition ⟨c_j(A), γ̇(t_0)⟩_{γ(t_0)} = 0.
For the dataset (t_{i,j}, y_{i,j}) (1 ≤ i ≤ p, 1 ≤ j ≤ n_i), the model may be summarized as:

y_{i,j} = η_{w_i}(γ, ψ_i(t_{i,j})) + ε_{i,j},  (1)

with ψ_i(t) = α_i(t − t_0 − τ_i) + t_0, α_i = exp(ξ_i), w_i = A s_i, and

ξ_i i.i.d. ∼ N(0, σ_ξ²), τ_i i.i.d. ∼ N(0, σ_τ²), ε_{i,j} i.i.d. ∼ N(0, σ² I_N), s_{i,l} i.i.d. ∼ Laplace(1/2).

The parameters of the model to be estimated are the fixed effects and the variances of the random effects, namely θ = (p_0, t_0, v_0, σ_ξ, σ_τ, σ, vec(A)).

2.3 Propagation model in a product manifold

We wish to use these developments to study the temporal progression of a family of biomarkers. We assume that each component of y_{i,j} is a scalar measurement of a given biomarker and belongs to a geodesically complete one-dimensional manifold (M, g). Therefore, each measurement y_{i,j} is a point in the product manifold M = M^N, which we assume to be equipped with the Riemannian product metric g^M = g + ... + g. We denote γ_0 the geodesic of the one-dimensional manifold M which goes through the point p_0 ∈ M at time t_0 with velocity v_0 ∈ T_{p_0}M. In order to determine the relative progression of the biomarkers among themselves, we consider a parametric family of geodesics of M^N: γ_δ(t) = (γ_0(t), γ_0(t + δ_1), ..., γ_0(t + δ_{N−1})). We assume here that all biomarkers have, on average, the same dynamics but shifted in time. This hypothesis allows us to model a temporal succession of effects during the course of the disease. The relative timing of biomarker changes is measured by the vector δ = (0, δ_1, ..., δ_{N−1}), which becomes a fixed effect of the model. In this setting, a curve that is parallel to a geodesic γ is given by the following lemma:

Lemma 1. Let γ be a geodesic of the product manifold M^N and let t_0 ∈ R. If η_w(γ, ·) denotes a parallel to the geodesic γ with w = (w_1, ..., w_N) ∈ T_{γ(t_0)}M^N and γ(t) = (γ_1(t), ..., γ_N(t)), we have

η_w(γ, s) = ( γ_1( w_1/γ̇_1(t_0) + s ), ..., γ_N( w_N/γ̇_N(t_0) + s ) ),  s ∈ R.
As a consequence, a parallel to the average trajectory γ_δ has the same form as the geodesic, but with randomly perturbed delays. The model (1) then writes: for all k ∈ {1, ..., N},

y_{i,j,k} = γ_0( w_{k,i}/γ̇_0(t_0 + δ_{k−1}) + α_i(t_{i,j} − t_0 − τ_i) + t_0 + δ_{k−1} ) + ε_{i,j,k},  (2)

where w_{k,i} denotes the k-th component of the space-shift w_i, and y_{i,j,k} the measurement of the k-th biomarker, at the j-th time point, for the i-th individual.

2.4 Multivariate logistic curves model

The propagation model given in (2) is now described for normalized biomarkers, such as scores of neuropsychological tests. In this case, we assume the manifold to be M = ]0, 1[ equipped with the Riemannian metric g given by: for p ∈ ]0, 1[ and (u, v) ∈ T_pM × T_pM, g_p(u, v) = u G(p) v with G(p) = 1/(p²(1 − p)²). The geodesics given by this metric in the one-dimensional Riemannian manifold M are logistic curves of the form:

γ_0(t) = ( 1 + (1/p_0 − 1) exp( −v_0(t − t_0)/(p_0(1 − p_0)) ) )^{−1},

which leads to the multivariate logistic curves model in M^N. Note the somewhat unusual parameterization of the logistic curve; it arises naturally because γ_0 satisfies γ_0(t_0) = p_0 and γ̇_0(t_0) = v_0. In this case, the model (1) writes:

y_{i,j,k} = ( 1 + (1/p_0 − 1) exp( −( v_0 α_i(t_{i,j} − t_0 − τ_i) + v_0 δ_k + v_0 (A s_i)_k / γ̇_0(t_0 + δ_k) ) / (p_0(1 − p_0)) ) )^{−1} + ε_{i,j,k},  (3)

where (A s_i)_k denotes the k-th component of the vector A s_i. Note that (3) is not equivalent to a linear model on the logit of the observations. The logit transform corresponds to the Riemannian logarithm at p_0 = 0.5. In our framework, p_0 is not fixed but estimated as a parameter of the model. Even with a fixed p_0 = 0.5, the model is still non-linear due to the multiplication between the random effects α_i and τ_i, and therefore does not boil down to the usual linear model [15].

3 Parameters estimation

In this section, we explain how to use a stochastic version of the Expectation-Maximization (EM) algorithm [5] to produce estimates of the parameters θ = (p_0, t_0, v_0, δ, σ_ξ, σ_τ, σ, vec(A)) of the model.
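The logistic geodesic γ_0 above can be sketched and sanity-checked numerically. The fixed-effect values used below (p_0 = 0.3, v_0 = 0.04, t_0 = 72) are those reported later in Section 4.2; the biomarker helper is a simplified, hypothetical reading of (3) that applies the time-warp and delay but omits the space-shift and noise terms.

```python
import math

# Sketch of the one-dimensional logistic geodesic
#   gamma_0(t) = (1 + (1/p0 - 1) * exp(-v0*(t - t0)/(p0*(1 - p0))))^(-1),
# which by construction satisfies gamma_0(t0) = p0 and gamma_0'(t0) = v0.

def gamma0(t, p0=0.3, v0=0.04, t0=72.0):
    return 1.0 / (1.0 + (1.0 / p0 - 1.0)
                  * math.exp(-v0 * (t - t0) / (p0 * (1.0 - p0))))

def biomarker(t, delta, alpha, tau, t0=72.0):
    """Simplified k-th biomarker value for an individual with acceleration
    alpha, time-shift tau and delay delta (space-shift and noise omitted)."""
    return gamma0(alpha * (t - t0 - tau) + t0 + delta)

value_at_t0 = gamma0(72.0)   # equals p0 = 0.3 by construction
```

The unusual parameterization pays off exactly here: p_0, t_0 and v_0 read off directly as the value and slope of the curve at the reference time.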
The algorithm detailed in this section is essentially the same as in [2]. Its scope is not limited to statistical models on product manifolds; the MCMC-SAEM algorithm can actually be used for the inference of a very large family of statistical models. The random effects z = (ξ_i, τ_i, s_{j,i}) (1 ≤ i ≤ p and 1 ≤ j ≤ N_s) are considered as hidden variables; with the observed data y = (y_{i,j,k})_{i,j,k}, (y, z) forms the complete data of the model. In this context, the Expectation-Maximization (EM) algorithm proposed in [5] is very efficient for computing the maximum likelihood estimate of θ. Due to the non-linearity and complexity of the model, the E step is intractable. As a consequence, we consider a stochastic version of the EM algorithm, namely the Monte Carlo Markov Chain Stochastic Approximation Expectation-Maximization (MCMC-SAEM) algorithm [2], based on [4]. This algorithm is an EM-like algorithm which alternates between three steps: simulation, stochastic approximation and maximization. If θ^(t) denotes the current parameter estimate, in the simulation step a sample z^(t) of the missing data is obtained from the transition kernel of an ergodic Markov chain whose stationary distribution is the conditional distribution of the missing data z given y and θ^(t), denoted by q(z | y, θ^(t)). This simulation step is achieved using a Metropolis-Hastings within Gibbs sampler. Note that the high complexity of our model prevents us from resorting to sampling methods as in [10], as they would require heavy computations, such as the Fisher information matrix. The stochastic approximation step consists in a stochastic approximation of the complete log-likelihood log q(y, z | θ), summarized as follows: Q_t(θ) = Q_{t−1}(θ) + ε_t [log q(y, z^(t) | θ) − Q_{t−1}(θ)], where (ε_t)_t is a decreasing sequence of positive step-sizes in ]0, 1] which satisfies Σ_t ε_t = +∞ and Σ_t ε_t² < +∞.
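The step-size conditions Σ ε_t = +∞ and Σ ε_t² < +∞ are satisfied, for example, by ε_t = t^(-0.65); the toy update below (with a deterministic alternating sequence standing in for MCMC draws, an illustrative assumption) shows how the stochastic approximation averages out the simulation noise.

```python
# Toy stochastic-approximation update S <- S + eps_t * (s_t - S) with
# eps_t = t^(-0.65), which satisfies sum eps = inf and sum eps^2 < inf.
# The alternating inputs stand in for noisy MCMC draws around a mean of 5.

def robbins_monro(samples, exponent=0.65):
    S = 0.0
    for t, s in enumerate(samples, start=1):
        eps = t ** (-exponent)
        S += eps * (s - S)
    return S

draws = [5.0 + (-1.0) ** t for t in range(10000)]   # alternates 6, 4, 6, 4, ...
S = robbins_monro(draws)                            # settles near the mean, 5.0
```

With a constant step-size the iterate would keep oscillating with fixed amplitude; the decreasing schedule is what makes the approximation converge.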
Finally, the parameter estimates are updated in the maximization step according to: θ^(t+1) = argmax_{θ∈Θ} Q_t(θ). The theoretical convergence of the MCMC SAEM algorithm is proved only if the model belongs to the curved exponential family or, equivalently, if the complete log-likelihood of the model may be written: log q(y, z | θ) = −φ(θ) + S(y, z)^⊤ ψ(θ), where S(y, z) is a sufficient statistic of the model. In this case, the stochastic approximation on the complete log-likelihood can be replaced with a stochastic approximation on the sufficient statistics of the model. Note that the multivariate logistic curves model does not belong to the curved exponential family. A usual workaround consists in regarding the parameters of the model as realizations of independent Gaussian random variables ([14]): θ ∼ N(θ̄, D), where D is a diagonal matrix with very small diagonal entries, and the estimation now targets θ̄. This yields: p_0 ∼ N(p̄_0, σ_{p_0}²), t_0 ∼ N(t̄_0, σ_{t_0}²), v_0 ∼ N(v̄_0, σ_{v_0}²) and, for all k, δ_k ∼ N(δ̄_k, σ_δ²). To ensure the orthogonality condition on the columns of A, we assume that A follows a normal distribution on the space Σ = {A = (c_1(A), ..., c_{N_s}(A)) ∈ (T_{γ_δ(t_0)}M)^{N_s} ; ∀j, ⟨c_j(A), γ̇_δ(t_0)⟩_{γ_δ(t_0)} = 0}. Equivalently, we assume that the matrix A writes A = Σ_{k=1}^{(N−1)N_s} β_k B_k where, for all k, β_k i.i.d. ∼ N(β̄_k, σ_β²) and (B_1, ..., B_{(N−1)N_s}) is an orthonormal basis of Σ obtained by applying the Gram-Schmidt process to a basis of Σ. The random variables β_1, ..., β_{(N−1)N_s} are considered as new hidden variables of the model. The parameters of the model are θ = (p̄_0, t̄_0, v̄_0, (δ̄_k)_{1≤k≤N−1}, (β̄_k)_{1≤k≤(N−1)N_s}, σ_ξ, σ_τ, σ), whereas the hidden variables are z = (p_0, t_0, v_0, (δ_k)_{1≤k≤N−1}, (β_k)_{1≤k≤(N−1)N_s}, (ξ_i)_{1≤i≤p}, (τ_i)_{1≤i≤p}, (s_{j,i})_{1≤j≤N_s, 1≤i≤p}). Algorithm 1, given below, summarizes the SAEM algorithm for this model. The MCMC-SAEM algorithm was tested on synthetic data generated according to (3).
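The Gram-Schmidt step used to build the orthonormal basis (B_1, ..., B_{(N−1)N_s}) can be sketched as follows; this minimal version works with the standard inner product (the paper uses the metric at γ_δ(t_0)), and the input vectors are toy data.

```python
import math

# Minimal Gram-Schmidt orthonormalization: each input vector is stripped of
# its components along the basis built so far, then normalized. (Near-)
# linearly dependent vectors are skipped.

def gram_schmidt(vectors):
    basis = []
    for v in vectors:
        w = list(v)
        for b in basis:
            proj = sum(x * y for x, y in zip(w, b))
            w = [x - proj * y for x, y in zip(w, b)]
        norm = math.sqrt(sum(x * x for x in w))
        if norm > 1e-12:
            basis.append([x / norm for x in w])
    return basis

B = gram_schmidt([[1.0, 1.0, 0.0], [1.0, 0.0, 1.0]])
```

Starting the process from a basis of the subspace orthogonal to γ̇_δ(t_0) keeps every B_k in that subspace, which is what enforces the orthogonality condition on the columns of A.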
The MCMC-SAEM allowed us to recover the parameters used to generate the synthetic dataset.

Algorithm 1: Overview of the MCMC SAEM algorithm for the multivariate logistic curves model. If z^(k) = (p_0^(k), t_0^(k), ..., (s_{j,i}^(k))_{j,i}) denotes the vector of hidden variables obtained in the simulation step of the k-th iteration of the MCMC SAEM, let f_{i,j} = [f_{i,j,l}] ∈ R^N, where f_{i,j,l} is the l-th component of η_{w_i^(k)}(γ_δ^(k), exp(ξ_i^(k))(t_{i,j} − t_0^(k) − τ_i^(k)) + t_0^(k)) and w_i^(k) = Σ_{l=1}^{(N−1)N_s} β_l^(k) B_l.

Initialization: θ ← θ^(0); z^(0) ← random; S ← 0; step-sizes (ε_k)_{k≥0}.
repeat
  Simulation step: z^(k) = (p_0^(k), t_0^(k), ..., (s_{j,i}^(k))_{j,i}) ← Gibbs Sampler(z^(k−1), y, θ^(k−1)).
  Compute the sufficient statistics:
    S_1^(k) ← (y_{i,j}^⊤ f_{i,j})_{i,j} ∈ R^K and S_2^(k) ← (∥f_{i,j}∥²)_{i,j} ∈ R^K, with 1 ≤ i ≤ p, 1 ≤ j ≤ n_i and K = Σ_{i=1}^p n_i;
    S_3^(k) ← ((ξ_i^(k))²)_i ∈ R^p; S_4^(k) ← ((τ_i^(k))²)_i ∈ R^p;
    S_5^(k) ← p_0^(k); S_6^(k) ← t_0^(k); S_7^(k) ← v_0^(k);
    S_8^(k) ← (δ_j^(k))_j ∈ R^{N−1}; S_9^(k) ← (β_j^(k))_j ∈ R^{(N−1)N_s}.
  Stochastic approximation step: S_j^(k+1) ← S_j^(k) + ε_k (S_j(y, z^(k)) − S_j^(k)) for j ∈ {1, ..., 9}.
  Maximization step:
    p_0^(k+1) ← S_5^(k); t_0^(k+1) ← S_6^(k); v_0^(k+1) ← S_7^(k);
    δ_j^(k+1) ← (S_8^(k))_j for all 1 ≤ j ≤ N − 1; β_j^(k+1) ← (S_9^(k))_j for all 1 ≤ j ≤ (N − 1)N_s;
    σ_ξ^(k+1) ← ( (1/p)(S_3^(k))^⊤ 1_p )^{1/2}; σ_τ^(k+1) ← ( (1/p)(S_4^(k))^⊤ 1_p )^{1/2};
    σ^(k+1) ← ( (1/(NK)) ( Σ_{i,j,k} y_{i,j,k}² − 2 (S_1^(k))^⊤ 1_K + (S_2^(k))^⊤ 1_K ) )^{1/2}.
until convergence
return θ.

4 Experiments

4.1 Data

We use the neuropsychological assessment test "ADAS-Cog 13" from the ADNI1, ADNIGO or ADNI2 cohorts of the Alzheimer's Disease Neuroimaging Initiative (ADNI) [1]. The "ADAS-Cog 13" consists of 13 questions which test the impairment of several cognitive functions. For the purpose of our analysis, these items are grouped into four categories: memory (5 items), language (5 items), praxis (2 items) and concentration (1 item). Scores within each category are added and normalized by the maximum possible score.
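The three steps of Algorithm 1 can be exercised end-to-end on a deliberately tiny model. The sketch below is an illustrative assumption, not the paper's model: observations y_i = z_i + noise with latent z_i ∼ N(θ, 1), a one-dimensional Metropolis sweep for the simulation step, a scalar sufficient statistic S = mean(z), and a closed-form maximization θ ← S.

```python
import math
import random

# Toy MCMC-SAEM in the spirit of Algorithm 1 (hypothetical model, not the
# multivariate logistic curves model): alternate Metropolis simulation of z,
# stochastic approximation of S = mean(z), and maximization theta <- S.

def mcmc_saem(y, iters=400, seed=0):
    rng = random.Random(seed)
    n = len(y)
    z = list(y)                 # initialize the latent variables at the data
    theta, S = 0.0, 0.0
    for t in range(1, iters + 1):
        # Simulation step: one Metropolis sweep targeting p(z | y, theta).
        for i in range(n):
            prop = z[i] + rng.gauss(0.0, 1.0)
            def logp(zi):
                return -0.5 * (zi - theta) ** 2 - 0.5 * (y[i] - zi) ** 2
            if logp(prop) - logp(z[i]) > math.log(rng.random()):
                z[i] = prop
        # Stochastic approximation step with eps_t = t^(-0.65).
        eps = t ** (-0.65)
        S += eps * (sum(z) / n - S)
        # Maximization step (closed form for this toy model).
        theta = S
    return theta

gen = random.Random(42)
data = [gen.gauss(3.0, math.sqrt(2.0)) for _ in range(300)]  # marginal N(3, 2)
theta_hat = mcmc_saem(data)                                  # should land near 3
```

In the paper's model the maximization step is equally cheap because the updates are closed-form functions of the nine sufficient statistics; the expensive part is the simulation step, as noted below.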
Consequently, each data point consists of four normalized scores, which can be seen as a point on the manifold M = ]0, 1[^4. We included 248 individuals in the study, who were diagnosed with mild cognitive impairment (MCI) at their first visit and whose diagnosis changed to AD before their last visit. There is an average of 6 visits per subject (min: 3, max: 11), with an average interval of 6 or 12 months between consecutive visits. The multivariate logistic curves model was used to analyze this longitudinal data. 4.2 Experimental results The model was applied with N_s = 1, 2 or 3 independent sources. In each experiment, the MCMC-SAEM was run five times with different initial parameter values, and the run which returned the smallest residual variance σ² was kept. The maximum number of iterations was arbitrarily set to 5000 and the number of burn-in iterations to 3000. The limit of 5000 iterations is enough to observe the convergence of the sequences of parameter estimates. As a result, two and three sources reduced the residual variance more than one source (σ² = 0.012 for one source, σ² = 0.08 for two sources and σ² = 0.084 for three sources). These residual variances mean that the model explains 79% (resp. 84%, 85%) of the total variance. We implemented our algorithm in MATLAB without any particular optimization scheme. The 5000 iterations require approximately one day. The number of parameters to be estimated is equal to 9 + 3N_s; therefore, the number of sources does not dramatically impact the runtime. Simulation is the most computationally expensive part of our algorithm. For each run of the Metropolis-Hastings algorithm, the proposal distribution is the prior distribution.
As a consequence, the acceptance ratio simplifies [2], and one computation of the acceptance ratio requires two evaluations of the likelihood of the observations, conditioned on different vectors of latent variables and on the current parameter estimates. The runtime could be improved by parallelizing the sampling over individuals. For the sake of clarity, and because the results obtained with three sources were similar to those obtained with two, we report here the experimental results obtained with two independent sources. The average model of disease progression γ_δ is plotted in Fig. 2. The estimated fixed effects are p_0 = 0.3, t_0 = 72 years, v_0 = 0.04 unit per year, and δ = [0; −15; −13; −5] years. This means that, on average, the memory score (first coordinate) reaches the value p_0 = 0.3 at t_0 = 72 years, followed by concentration, which reaches the same value at t_0 + 5 = 77 years, and then by praxis and language at ages 85 and 87 years respectively. Random effects show the variability of this average trajectory within the studied population. The standard deviation of the time-shift equals σ_τ = 7.5 years, meaning that the disease progression model in Fig. 2 is shifted by ±7.5 years to account for the variability in the age of disease onset. The effects of the variance of the acceleration factors and of the two independent components of the space-shifts are illustrated in Fig. 4. The acceleration factors show the variability in the pace of disease progression, which ranges between 7 times faster and 7 times slower than the average. The first independent component shows variability in the relative timing of the cognitive impairments: in one direction, memory and concentration are impaired nearly at the same time, followed by language and praxis; in the other direction, memory is followed by concentration, and then language and praxis are nearly superimposed.
The second independent component leaves the timing of memory and concentration almost fixed, and shows a great variability in the relative timing of praxis and language impairment. It shows that the ordering of the last two may be inverted in different individuals. Overall, these space-shift components show that the onset of cognitive impairment tends to occur in pairs: memory & concentration followed by language & praxis. Individual estimates of the random effects are obtained from the simulation step of the last iteration of the algorithm and are plotted in Fig. 5. The figure shows that the estimated individual time-shifts correspond well to the age at which individuals were diagnosed with AD. This means that the value p_0 estimated by the model is a good threshold to determine diagnosis (a fact that has occurred by chance), and more importantly that the time-warps correctly register the dynamics of the individual trajectories, so that the normalized ages correspond to the same stage of disease progression across individuals. This fact is corroborated by Fig. 3, which shows that the normalized age of conversion to AD peaks at 77 years old with a small variance compared to the real distribution of ages of conversion. Figure 2: The four curves represent the estimated average trajectory. A vertical line is drawn at t_0 = 72 years old and a horizontal line at p_0 = 0.3. Figure 3: In blue (resp. red): histogram of the ages of conversion to AD (t_i^diag) (resp. normalized ages of conversion to AD (ψ_i(t_i^diag))), with ψ_i the time-warp as in (1). Figure 4: Variability in disease progression superimposed with the average trajectory γ_δ (dotted lines): effects of the acceleration factor, with plots of γ_δ(exp(±σ_ξ)(t − t_0) + t_0) (first column), and of the first and second independent components of the space-shift, with plots of η_{±σ_s c_i(A)}(γ_δ, ·) for i = 1 or 2 (second and third columns respectively).
Figure 5: Plots of individual random effects: log-acceleration factor ξ_i = log(α_i) against time-shift t_0 + τ_i. Color corresponds to the age of conversion to AD. 4.3 Discussion and perspectives We proposed a generic spatiotemporal model to analyze longitudinal manifold-valued measurements. The fixed effects define a group-average trajectory, which is a geodesic on the data manifold. Random effects are subject-specific acceleration factors, time-shifts and space-shifts, which provide insightful information about the variations in the direction of the individual trajectories and the relative pace at which they are followed. This model was used to estimate a normative scenario of Alzheimer’s disease progression from neuropsychological tests. We validated the estimates of the spatiotemporal registration between individual trajectories by the fact that they put into correspondence the same event on individual trajectories, namely the age at diagnosis. Alternatives for estimating models of disease progression include the event-based model [9], which estimates the ordering of categorical variables. Our model may be seen as a generalization of this model to continuous variables, which estimates not only the ordering of the events but also the relative timing between them. Practical solutions to combine spatial and temporal sources of variation in longitudinal data are given in [7]. Our goal here was to propose theoretical and algorithmic foundations for the systematic treatment of such questions. References [1] The Alzheimer’s Disease Neuroimaging Initiative, https://ida.loni.usc.edu/ [2] Allassonnière, S., Kuhn, E., Trouvé, A.: Construction of Bayesian deformable models via a stochastic approximation algorithm: a convergence study. Bernoulli 16(3), 641–678 (2010) [3] Braak, H., Braak, E.: Staging of Alzheimer’s disease-related neurofibrillary changes.
Neurobiology of Aging 16(3), 271–278 (1995) [4] Delyon, B., Lavielle, M., Moulines, E.: Convergence of a stochastic approximation version of the EM algorithm. Annals of Statistics pp. 94–128 (1999) [5] Dempster, A.P., Laird, N.M., Rubin, D.B.: Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society, Series B (Methodological) pp. 1–38 (1977) [6] Diggle, P., Heagerty, P., Liang, K.Y., Zeger, S.: Analysis of Longitudinal Data. Oxford University Press (2002) [7] Donohue, M.C., Jacqmin-Gadda, H., Le Goff, M., Thomas, R.G., Raman, R., Gamst, A.C., Beckett, L.A., Jack, C.R., Weiner, M.W., Dartigues, J.F., Aisen, P.S., the Alzheimer’s Disease Neuroimaging Initiative: Estimating long-term multivariate progression from short-term data. Alzheimer’s & Dementia 10(5), 400–410 (2014) [8] Durrleman, S., Pennec, X., Trouvé, A., Braga, J., Gerig, G., Ayache, N.: Toward a comprehensive framework for the spatiotemporal statistical analysis of longitudinal shape data. International Journal of Computer Vision 103(1), 22–59 (2013) [9] Fonteijn, H.M., Modat, M., Clarkson, M.J., Barnes, J., Lehmann, M., Hobbs, N.Z., Scahill, R.I., Tabrizi, S.J., Ourselin, S., Fox, N.C., et al.: An event-based model for disease progression and its application in familial Alzheimer’s disease and Huntington’s disease. NeuroImage 60(3), 1880–1889 (2012) [10] Girolami, M., Calderhead, B.: Riemann manifold Langevin and Hamiltonian Monte Carlo methods. Journal of the Royal Statistical Society: Series B (Statistical Methodology) 73(2), 123–214 (2011) [11] Hirsch, M.W.: Differential Topology. Springer Science & Business Media (2012) [12] Hyvärinen, A., Karhunen, J., Oja, E.: Independent Component Analysis, vol. 46. John Wiley & Sons (2004) [13] Jack, C.R., Knopman, D.S., Jagust, W.J., Shaw, L.M., Aisen, P.S., Weiner, M.W., Petersen, R.C., Trojanowski, J.Q.: Hypothetical model of dynamic biomarkers of the Alzheimer’s pathological cascade.
The Lancet Neurology 9(1), 119–128 (2010) [14] Kuhn, E., Lavielle, M.: Maximum likelihood estimation in nonlinear mixed effects models. Computational Statistics & Data Analysis 49(4), 1020–1038 (2005) [15] Laird, N.M., Ware, J.H.: Random-effects models for longitudinal data. Biometrics pp. 963–974 (1982) [16] Singer, J.D., Willett, J.B.: Applied Longitudinal Data Analysis: Modeling Change and Event Occurrence. Oxford University Press (2003) [17] Singh, N., Hinkle, J., Joshi, S., Fletcher, P.T.: A hierarchical geodesic model for diffeomorphic longitudinal shape analysis. In: Information Processing in Medical Imaging, pp. 560–571. Springer (2013) [18] Su, J., Kurtek, S., Klassen, E., Srivastava, A., et al.: Statistical analysis of trajectories on Riemannian manifolds: Bird migration, hurricane tracking and video surveillance. The Annals of Applied Statistics 8(1), 530–552 (2014) | 2015 | 46 |
5,935 | Fast Classification Rates for High-dimensional Gaussian Generative Models Tianyang Li Adarsh Prasad Pradeep Ravikumar Department of Computer Science, UT Austin {lty,adarsh,pradeepr}@cs.utexas.edu Abstract We consider the problem of binary classification when the covariates, conditioned on each of the response values, follow multivariate Gaussian distributions. We focus on the setting where the covariance matrices for the two conditional distributions are the same. The corresponding generative model classifier, derived via the Bayes rule and also called Linear Discriminant Analysis, has been shown to behave poorly in high-dimensional settings. We present a novel analysis of the classification error of any linear discriminant approach given conditional Gaussian models. This allows us to compare the generative model classifier, other recently proposed discriminative approaches that directly learn the discriminant function, and finally logistic regression, which is another classical discriminative model classifier. As we show, under a natural sparsity assumption, and letting s denote the sparsity of the Bayes classifier, p the number of covariates, and n the number of samples, the simple (ℓ1-regularized) logistic regression classifier achieves the fast misclassification error rate of O(s log p / n), which is much better than the other approaches, which are either inconsistent under high-dimensional settings or achieve a slower rate of O(√(s log p / n)). 1 Introduction We consider the problem of classification of a binary response given p covariates. A popular class of approaches is statistical decision-theoretic: given a classification evaluation metric, they optimize a surrogate evaluation metric that is computationally tractable, yet has strong guarantees on sample complexity, namely, the number of observations required for some bound on the expected classification evaluation metric.
These guarantees and methods have been developed largely for the zero-one evaluation metric, and extending them to general evaluation metrics is an area of active research. Another class of classification methods is relatively evaluation-metric agnostic, which is an important desideratum in modern settings, where the evaluation metric for an application is typically less clear: these are based on learning statistical models over the response and covariates, and can be categorized into two classes. The first are the so-called generative models, where we specify conditional distributions of the covariates conditioned on the response, and then use the Bayes rule to derive the conditional distribution of the response given the covariates. The second are the so-called discriminative models, where we directly specify the conditional distribution of the response given the covariates. In the classical fixed-p setting, we now have a good understanding of the performance of the classification approaches above. For generative and discriminative modeling based approaches, consider the specific case of Naive Bayes generative models and logistic regression discriminative models, which form a so-called generative-discriminative pair (in such a pair, the discriminative model has the same form as the conditional distribution of the response given the covariates specified by the Bayes rule given the generative model). Ng and Jordan [27] provided qualitative consistency analyses, and showed that under small-sample settings, generative model classifiers converge at a faster rate to their population error rate than discriminative model classifiers, though the population error rate of the discriminative model classifiers could be potentially lower than that of the generative model classifiers due to weaker model assumptions.
But if the generative model assumption holds, then generative model classifiers seem preferable to discriminative model classifiers. In this paper, we investigate whether this conventional wisdom holds even under high-dimensional settings. We focus on the simple generative model where the response is binary, and the covariates, conditioned on each of the response values, follow a conditional multivariate Gaussian distribution. We also assume that the covariance matrices of the two conditional Gaussian distributions are the same. The corresponding generative model classifier, derived via the Bayes rule, is known in the statistics literature as the Linear Discriminant Analysis (LDA) classifier [21]. Under classical settings where p ≪ n, the misclassification error rate of this classifier has been shown to converge to that of the Bayes classifier. However, in a high-dimensional setting, where the number of covariates p could scale with the number of samples n, this performance of the LDA classifier breaks down. In particular, Bickel and Levina [3] show that when p/n → ∞, the LDA classifier could converge to an error rate of 0.5, that of random chance. What should one then do, when we are even allowed this generative model assumption, and when p > n? Bickel and Levina [3] suggest the use of a Naive Bayes or conditional independence assumption, which in the conditional Gaussian context assumes the covariance matrices to be diagonal. As they showed, the corresponding Naive Bayes LDA classifier does have a misclassification error rate that is better than chance, but it is asymptotically biased: it converges to an error rate that is strictly larger than that of the Bayes classifier when the Naive Bayes conditional independence assumption does not hold.
Bickel and Levina [3] also considered a weakening of the Naive Bayes rule: assuming that the covariance matrix is weakly sparse and that the means satisfy an ellipsoidal constraint, they showed that an estimator which leverages these structural constraints converges to the Bayes risk at a rate of O(log(n)/n^γ), where 0 < γ < 1 depends on the mean and covariance structural assumptions. A caveat is that these covariance sparsity assumptions might not hold in practice. Similar caveats apply to the related works on feature annealed independence rules [14] and nearest shrunken centroids [29, 30]. Moreover, even when the assumptions hold, they do not yield the “fast” rates of O(1/n). An alternative approach is to directly impose sparsity on the linear discriminant [28, 7], which is weaker than the covariance sparsity assumptions (though [28] impose these in addition). [28, 7] then proposed new estimators that leveraged these assumptions, but while they were able to show convergence to the Bayes risk, they were only able to show a slower rate of O(√(s log p / n)). It is instructive at this juncture to look at recent results on classification error rates from the machine learning community. A key notion of importance here is whether the two classes are separable, which can be understood as requiring that the classification error of the Bayes classifier is 0. Classical learning theory gives a rate of O(1/√n) for any classifier when the two classes are non-separable, and it has been shown that this is also minimax [12], with the note that this is relatively distribution agnostic, since it assumes very little about the underlying distributions. When the two classes are non-separable, only rates slower than O(1/n) are known. Another key notion is a “low-noise condition” [25], under which certain classifiers can be shown to attain a rate faster than o(1/√n), albeit not at the O(1/n) rate unless the two classes are separable.
Specifically, let α denote a constant such that P(|P(Y = 1|X) − 1/2| ≤ t) ≤ O(t^α) (1) holds as t → 0. This is said to be a low-noise assumption since, as α → +∞, the two classes start becoming separable, that is, the Bayes risk approaches zero. Under this low-noise assumption, the known rate for the excess 0-1 risk is O((1/n)^{(1+α)/(2+α)}) [23]. Note that this is always slower than O(1/n) when α < +∞. There has been a surge of recent results on high-dimensional statistical analyses of M-estimators [26, 9, 1]. These however are largely focused on parameter error bounds, empirical and population log-likelihood, and sparsistency. In this paper however, we are interested in analyzing the zero-one classification error under high-dimensional sampling regimes. One could stitch these recent results together to obtain some error bounds: use bounds on the excess log-likelihood, and use transforms from [2] to convert excess log-likelihood bounds into bounds on the 0-1 classification error; however, the resulting bounds are very loose, and in particular do not yield the fast rates that we seek. In this paper, we leverage the closed-form expression for the zero-one classification error of our generative model, and directly analyze it to give faster rates for any linear discriminant method. Our analyses show that, assuming a sparse linear discriminant in addition, the simple ℓ1-regularized logistic regression classifier achieves near-optimal fast rates of O(s log p / n), even without requiring that the two classes be separable. 2 Problem Setup We consider the problem of high-dimensional binary classification under the following generative model. Let Y ∈ {0, 1} denote a binary response variable, and let X = (X_1, ..., X_p) ∈ R^p denote a set of p covariates. For technical simplicity, we assume Pr[Y = 1] = Pr[Y = 0] = 1/2; however, our analysis easily extends to the more general case when Pr[Y = 1], Pr[Y = 0] ∈ [δ_0, 1 − δ_0] for some constant 0 < δ_0 < 1/2.
We assume that X|Y ∼ N(μ_Y, Σ_Y), i.e. conditioned on a response, the covariates follow a multivariate Gaussian distribution. We assume we are given n training samples {(X^(1), Y^(1)), (X^(2), Y^(2)), ..., (X^(n), Y^(n))} drawn i.i.d. from the conditional Gaussian model above. For any classifier C : R^p → {1, 0}, the 0-1 risk, or simply the classification error, is given by R_{0-1}(C) = E_{X,Y}[ℓ_{0-1}(C(X), Y)], where ℓ_{0-1}(C(x), y) = 1(C(x) ≠ y) is the 0-1 loss. It can also be simply written as R(C) = Pr[C(X) ≠ Y]. The classifier attaining the lowest classification error is known as the Bayes classifier, which we will denote by C*. Under the generative model assumption above, the Bayes classifier can be derived simply as C*(X) = 1(log(Pr[Y = 1|X]/Pr[Y = 0|X]) > 0), so that a sample X is classified as 1 if Pr[Y = 1|X]/Pr[Y = 0|X] > 1, and as 0 otherwise. We denote the error of the Bayes classifier R* = R(C*). When Σ_1 = Σ_0 = Σ, log(Pr[Y = 1|X]/Pr[Y = 0|X]) = (μ_1 − μ_0)^T Σ^{-1} X + (1/2)(−μ_1^T Σ^{-1} μ_1 + μ_0^T Σ^{-1} μ_0) (2) and we denote this quantity as w*^T X + b*, where w* = Σ^{-1}(μ_1 − μ_0) and b* = (−μ_1^T Σ^{-1} μ_1 + μ_0^T Σ^{-1} μ_0)/2, so that the Bayes classifier can be written as C*(x) = 1(w*^T x + b* > 0). For any trained classifier Ĉ we are interested in bounding the excess risk, defined as R(Ĉ) − R*. The generative approach to training a classifier is to estimate Σ^{-1} and the means from data, and then plug the estimates into Equation 2 to construct the classifier. This classifier is known as the linear discriminant analysis (LDA) classifier, whose theoretical properties have been well studied in the classical fixed-p setting. The discriminative approach to training is to estimate Pr[Y = 1|X]/Pr[Y = 0|X] directly from samples. 2.1 Assumptions. We assume that the means are bounded, i.e. μ_1, μ_0 ∈ {μ ∈ R^p : ∥μ∥_2 ≤ B_μ}, where B_μ is a constant which does not scale with p. We assume that the covariance matrix Σ is non-degenerate, i.e. all eigenvalues of Σ are in [B_{λmin}, B_{λmax}].
Additionally, we assume (μ_1 − μ_0)^T Σ^{-1} (μ_1 − μ_0) ≥ B_s, which gives an upper bound on the Bayes classifier's classification error, R* ≤ 1 − Φ((1/2)√B_s); since the means and the eigenvalues of Σ are bounded, R* is also bounded away from zero. Note that this assumption is different from the definition of separable classes in [11] and the low-noise condition in [25], and the two classes are still not separable because R* > 0. 2.1.1 Sparsity Assumption. Motivated by [7], we assume that Σ^{-1}(μ_1 − μ_0) is sparse, with at most s non-zero entries. Cai and Liu [7] extensively discuss and show that such a sparsity assumption is much weaker than assuming Σ^{-1} and (μ_1 − μ_0) to be individually sparse. We refer the reader to [7] for an elaborate discussion. 2.2 Generative Classifiers Generative techniques work by estimating Σ^{-1} and (μ_1 − μ_0) from data and plugging them into Equation 2. In high dimensions, simple estimation techniques do not perform well: when p ≫ n, the sample estimate of the covariance matrix Σ̂ is singular, and using the generalized inverse of the sample covariance matrix makes the estimator highly biased and unstable. Numerous alternative approaches have been proposed that impose structural conditions on Σ or Σ^{-1} and (μ_1 − μ_0) to ensure that they can be estimated consistently. Some early work based on nearest shrunken centroids [29, 30], feature annealed independence rules [14], and Naive Bayes [4] imposed independence assumptions on Σ, which are often violated in real-world applications. [4, 13] impose more complex structural assumptions on the covariance matrix and suggest more complicated thresholding techniques. Most commonly, Σ^{-1} and (μ_1 − μ_0) are assumed to be sparse, and then some thresholding techniques are used to estimate them consistently [17, 28]. 2.3 Discriminative Classifiers. Recently, more direct techniques have been proposed to solve the sparse LDA problem. Let Σ̂ and μ̂ be consistent estimators of Σ and μ = (μ_1 + μ_0)/2. Fan et al.
[15] proposed the Regularized Optimal Affine Discriminant (ROAD) approach, which minimizes w^T Σ w with w^T μ restricted to be a constant value, together with an ℓ1-penalty on w: w_ROAD = argmin_{w^T μ̂ = 1, ∥w∥_1 ≤ c} w^T Σ̂ w. (3) Kolar and Liu [22] provided theoretical insights into the ROAD estimator by analyzing its consistency for variable selection. Cai and Liu [7] proposed another variant called the linear programming discriminant (LPD), which tries to make w close to the Bayes rule's linear term Σ^{-1}(μ_1 − μ_0) in the ℓ∞ norm. This can be cast as a linear programming problem related to the Dantzig selector [8]: w_LPD = argmin_w ∥w∥_1 s.t. ∥Σ̂ w − μ̂∥_∞ ≤ λ_n. (4) Mai et al. [24] proposed another version of sparse linear discriminant analysis based on an equivalent least-squares formulation of the LDA, where they solve an ℓ1-regularized least-squares problem to produce a consistent classifier. All the techniques above either do not have finite-sample convergence rates, or their 0-1 risk converges at the slow rate of O(√(s log p / n)). In this paper, we first provide an analysis of classification error rates for any classifier with a linear discriminant function, and then follow this analysis by investigating the performance of generative and discriminative classifiers for the conditional Gaussian model. 3 Classifiers with Sparse Linear Discriminants We first analyze any classifier with a linear discriminant function, of the form C(x) = 1(w^T x + b > 0). We first note that the 0-1 classification error of any such classifier is available in closed form as R(w, b) = 1 − (1/2)Φ((w^T μ_1 + b)/√(w^T Σ w)) − (1/2)Φ(−(w^T μ_0 + b)/√(w^T Σ w)), (5) which can be shown by noting that w^T X + b is a univariate normal random variable when conditioned on the label Y. Next, we relate the 0-1 classification error above to that of the Bayes classifier. Recall the earlier notation of the Bayes classifier as C*(x) = 1(x^T w* + b* > 0).
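The closed-form risk (5) and the Bayes parameters in (2) are easy to check numerically. The sketch below is an illustration rather than part of the paper; NumPy and SciPy are assumed, and the function names are our own.

```python
import numpy as np
from scipy.stats import norm

def zero_one_risk(w, b, mu1, mu0, Sigma):
    """Closed-form 0-1 risk (Eq. 5) of the linear classifier 1(w^T x + b > 0)
    under the equal-prior two-class Gaussian model with shared covariance."""
    s = np.sqrt(w @ Sigma @ w)
    return 1.0 - 0.5 * norm.cdf((w @ mu1 + b) / s) - 0.5 * norm.cdf(-(w @ mu0 + b) / s)

def bayes_params(mu1, mu0, Sigma):
    """Bayes discriminant parameters w* = Sigma^{-1}(mu1 - mu0) and b* (Eq. 2)."""
    Sinv = np.linalg.inv(Sigma)
    w = Sinv @ (mu1 - mu0)
    b = 0.5 * (-mu1 @ Sinv @ mu1 + mu0 @ Sinv @ mu0)
    return w, b
```

At (w*, b*) the risk evaluates to 1 − Φ(S*/2) with S* = √((μ_1 − μ_0)^T Σ^{-1}(μ_1 − μ_0)), and any perturbation of (w*, b*) can only increase the risk, consistent with Theorem 1 below.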
The following theorem is a key result of the paper: for any linear discriminant classifier whose parameters are close to those of the Bayes classifier, the excess 0-1 risk is bounded only by second-order terms of the parameter difference. Note that this theorem will enable fast classification rates if we obtain fast rates for the parameter error. Theorem 1. Let w = w* + Δ, b = b* + Ω, with Δ → 0, Ω → 0; then we have R(w = w* + Δ, b = b* + Ω) − R(w*, b*) = O(∥Δ∥_2² + Ω²). (6) Proof. Denote the quantity S* = √((μ_1 − μ_0)^T Σ^{-1} (μ_1 − μ_0)); then we have (μ_1^T w* + b*)/√(w*^T Σ w*) = (−μ_0^T w* − b*)/√(w*^T Σ w*) = (1/2)S*. Using (5) and the Taylor series expansion of Φ(·) around (1/2)S*, we have |R(w, b) − R(w*, b*)| = (1/2)|(Φ((μ_1^T w + b)/√(w^T Σ w)) − Φ((1/2)S*)) + (Φ((−μ_0^T w − b)/√(w^T Σ w)) − Φ((1/2)S*))| (7) ≤ K_1 |((μ_1 − μ_0)^T w)/√(w^T Σ w) − S*| + K_2 ((μ_1^T w + b)/√(w^T Σ w) − (1/2)S*)² + K_3 ((−μ_0^T w − b)/√(w^T Σ w) − (1/2)S*)², where K_1, K_2, K_3 > 0 are constants because the first- and second-order derivatives of Φ(·) are bounded. First note that |√(w^T Σ w) − √(w*^T Σ w*)| = O(∥Δ∥_2), because ∥w*∥_2 is bounded. Denote w″ = Σ^{1/2} w, Δ″ = Σ^{1/2} Δ, w″* = Σ^{1/2} w*, a″ = Σ^{-1/2}(μ_1 − μ_0); we have (by the binomial Taylor series expansion) ((μ_1 − μ_0)^T w)/√(w^T Σ w) − S* = a″^T w″/√(w″^T w″) − √(a″^T a″) (8) = [(1 + a″^T Δ″/(a″^T a″)) − √(1 + 2 a″^T Δ″/(a″^T a″) + Δ″^T Δ″/(a″^T a″))] / (√(w″^T w″)/(a″^T a″)) = O(∥Δ″∥_2² / √(a″^T a″)). Noting that w″ → a″, Δ″ → 0, ∥Δ∥_2 = Θ(∥Δ″∥_2), and S* is lower bounded, we have |((μ_1 − μ_0)^T w)/√(w^T Σ w) − S*| = O(∥Δ∥_2²). Next we bound |(μ_1^T w + b)/√(w^T Σ w) − (1/2)S*|: |(μ_1^T w + b)/√(w^T Σ w) − (1/2)S*| = |((μ_1^T w* + b*)(√(w*^T Σ w*) − √(w^T Σ w)) + √(w*^T Σ w*)(μ_1^T Δ + Ω)) / (√(w^T Σ w) √(w*^T Σ w*))| (9) = O(√(∥Δ∥_2² + Ω²)), where we use the fact that |μ_1^T w* + b*| and S* are bounded. Similarly |(−μ_0^T w − b)/√(w^T Σ w) − (1/2)S*| = O(√(∥Δ∥_2² + Ω²)). Combining the above bounds we get the desired result. 4 Logistic Regression Classifier In this section, we show that the simple ℓ1-regularized logistic regression classifier attains fast classification error rates.
Specifically, we are interested in the M-estimator [21] below: (ŵ, b̂) = argmin_{w,b} { (1/n) Σ_i [−Y^(i)(w^T X^(i) + b) + log(1 + exp(w^T X^(i) + b))] + λ(∥w∥_1 + |b|) }, (10) which maximizes the penalized log-likelihood of the logistic regression model; this likelihood also corresponds to the conditional probability of the response given the covariates, P(Y|X), under the conditional Gaussian model. Note that here we penalize the intercept term b as well. Although the intercept term usually is not penalized (e.g. [19]), some packages (e.g. [16]) do penalize it. Our analysis shows that penalizing the intercept term does not degrade the performance of the classifier. In [2, 31] it is shown that minimizing the expected risk of the logistic loss also minimizes the classification error of the corresponding linear classifier. ℓ1-regularized logistic regression is a popular classification method in many settings [18, 5]. Several commonly used packages ([19, 16]) have been developed for ℓ1-regularized logistic regression, and recent works ([20, 10]) have focused on scaling regularized logistic regression to ultra-high dimensions and large numbers of samples. 4.1 Analysis We first show that the ℓ1-regularized logistic regression estimator above converges to the Bayes classifier parameters, using recent results on regularized M-estimators. Next we use the theorem from the previous section to argue that, since the estimated parameters ŵ, b̂ are close to the Bayes classifier's parameters w*, b*, the excess risk of the classifier using the estimated parameters is tightly bounded as well. For the first step, we show a restricted eigenvalue condition for X′ = (X, 1), where X are our covariates, which come from a mixture of two Gaussian distributions (1/2)N(μ_1, Σ) + (1/2)N(μ_0, Σ). Note that X′ is not zero centered, which differs from existing scenarios ([26], [6], etc.) that assume zero-centered covariates. We denote w′ = (w, b), S′ = {i : w′*_i ≠ 0}, and s′ = |S′| ≤ s + 1. Lemma 1.
With probability 1 − δ, ∀v′ ∈ A′ ⊆ {v′ ∈ R^{p+1} : ∥v′∥_2 = 1}, for some constants κ_1, κ_2, κ_3 > 0 we have ∥X′v′∥_2 ≥ κ_1 √n − κ_2 w(A′) − κ_3 √(log(1/δ)), (11) where w(A′) = E_{g′∼N(0, I_{p+1})}[sup_{a′∈A′} g′^T a′] is the Gaussian width of A′. In the special case A′ = {v′ : ∥v′_{S̄′}∥_1 ≤ 3∥v′_{S′}∥_1, ∥v′∥_2 = 1}, we have w(A′) = O(√(s log p)). Proof. First note that X′ is sub-Gaussian with bounded parameter, and Σ′ = E[(1/n) X′^T X′] = [Σ + (1/2)(μ_1 μ_1^T + μ_0 μ_0^T), (1/2)(μ_1 + μ_0); (1/2)(μ_1 + μ_0)^T, 1] (12) (in block-matrix notation). Note that A Σ′ A^T = [Σ + (1/4)(μ_1 − μ_0)(μ_1 − μ_0)^T, 0; 0, 1], where A = [I_p, −(1/2)(μ_1 + μ_0); 0, 1] and A^{-1} = [I_p, (1/2)(μ_1 + μ_0); 0, 1]. Notice that A A^T = [I_p, 0; 0, 0] + u u^T with u = (−(1/2)(μ_1 + μ_0), 1), and A^{-1} A^{-T} = [I_p, 0; 0, 0] + ũ ũ^T with ũ = ((1/2)(μ_1 + μ_0), 1), so the singular values of A and A^{-1} are lower bounded by 1/√(2 + B_μ²) and upper bounded by √(2 + B_μ²). Let λ_1 be the minimum eigenvalue of Σ′, and u′_1 (with ∥u′_1∥_2 = 1) the corresponding eigenvector. From the expression A Σ′ A^T A^{-T} u′_1 = λ_1 A u′_1, we know that the minimum eigenvalue of Σ′ is lower bounded; similarly, the largest eigenvalue of Σ′ is upper bounded. The desired result then follows the proof of Theorem 10 in [1]. Although the proof of Theorem 10 in [1] is for zero-centered random variables, it remains valid for non-zero-centered random variables. When A′ = {v′ : ∥v′_{S̄′}∥_1 ≤ 3∥v′_{S′}∥_1, ∥v′∥_2 = 1}, [9] gives w(A′) = O(√(s log p)). Having established a restricted eigenvalue result in Lemma 1, we next use the result in [26] for parameter recovery in generalized linear models (GLMs) to show that ℓ1-regularized logistic regression can recover the Bayes classifier parameters. Lemma 2. When the number of samples n ≫ s′ log p, and we choose λ = c_0 √(log p / n) for some constant c_0, then we have ∥w* − ŵ∥_2² + (b* − b̂)² = O(s′ log p / n) (13) with probability at least 1 − O(1/p^{c_1} + 1/n^{c_2}), where c_1, c_2 > 0 are constants. Proof. Following the proof of Lemma 1, we see that conditions (GLM1) and (GLM2) in [26] are satisfied.
Following the proof of Proposition 2 and Corollary 5 in [26], we have the desired result. Although the proof of Proposition 2 and Corollary 5 in [26] is for zero-centered random variables, it remains valid for non-zero-centered random variables. Combining Lemma 2 and Theorem 1, we have the following theorem, which gives a fast rate for the excess 0-1 risk of a classifier trained using ℓ1-regularized logistic regression. Theorem 2. With probability at least 1 − O(1/p^{c_1} + 1/n^{c_2}), where c_1, c_2 > 0 are constants, when we set λ = c_0 √(log p / n) for some constant c_0, the estimate (ŵ, b̂) in (10) satisfies R(ŵ, b̂) − R(w*, b*) = O(s log p / n). (14) Proof. This follows from Lemma 2 and Theorem 1. 5 Other Linear Discriminant Classifiers In this section, we provide convergence results for the 0-1 risk of the other linear discriminant classifiers discussed in Section 2.3. Naive Bayes. We compare the discriminative approach using ℓ1-regularized logistic regression to the generative approach using Naive Bayes. For illustration purposes we consider the case where Σ = I_p, μ_1 = (M_1/√s)(1_s, 0_{p−s}) and μ_0 = −(M_0/√s)(1_s, 0_{p−s}), where 0 < B_1 ≤ M_1, M_0 ≤ B_2 are unknown but bounded constants. In this case w* = ((M_1 + M_0)/√s)(1_s, 0_{p−s}) and b* = (1/2)(−M_1² + M_0²). Using Naive Bayes we estimate ŵ = μ̄_1 − μ̄_0, where μ̄_1 = (1/Σ_i 1(Y^(i) = 1)) Σ_{Y^(i)=1} X^(i) and μ̄_0 = (1/Σ_i 1(Y^(i) = 0)) Σ_{Y^(i)=0} X^(i). Thus with high probability we have ∥ŵ − w*∥_2² = O(p/n); using Theorem 1 we get a slower rate than the bound given in Theorem 2 for discriminative classification using ℓ1-regularized logistic regression. LPD [7]. LPD uses a linear program similar to the Dantzig selector. Lemma 3 (Cai and Liu [7], Theorem 4). Let λ_n = C √(s log p / n) with C a sufficiently large constant. Let n > log p, let Δ = (μ_1 − μ_0)^T Σ^{-1} (μ_1 − μ_0) > c_1 for some constant c_1 > 0, and let w_LPD be obtained as in Equation 4; then with probability greater than 1 − O(p^{-1}), we have R(w_LPD)/R* − 1 = O(√(s log p / n)).
SLDA [28]. SLDA uses thresholded estimates of Σ and μ₁ − μ₀. We state a simpler version of its guarantee.

Lemma 4 ([28], Theorem 3). Assume that Σ and μ₁ − μ₀ are sparse. Then, with high probability,
\[ \frac{R(w_{SLDA})}{R^*} - 1 = O\Big(\max\Big\{\Big(\frac{s\log p}{n}\Big)^{\alpha_1},\ \Big(\frac{S\log p}{n}\Big)^{\alpha_2}\Big\}\Big), \]
where s = \|\mu_1 - \mu_0\|_0, S is the number of non-zero entries in Σ, and \alpha_1, \alpha_2 \in (0, \tfrac{1}{2}) are constants.

ROAD [15]. ROAD minimizes w^{\top}\Sigma w subject to w^{\top}\mu being held at a constant value, with an ℓ₁ penalty on w.

Lemma 5 (Fan et al. [15], Theorem 1). Assume that, with high probability, \|\hat\Sigma - \Sigma\|_\infty = O(\sqrt{\log p / n}) and \|\hat\mu - \mu\|_\infty = O(\sqrt{\log p / n}), and let w_{ROAD} be obtained as in Equation 3. Then, with high probability,
\[ R(w_{ROAD}) - R^* = O\Big(\sqrt{\frac{s\log p}{n}}\Big). \]

6 Experiments

In this section we describe experiments that illustrate the rates for the excess 0-1 risk given in Theorem 2. In our experiments we use Glmnet [19], a popular package for ℓ₁-regularized logistic regression based on coordinate descent methods, where we set the option to penalize the intercept term along with all other parameters. For illustration purposes, in all simulations we use
\[ \Sigma = I_p, \quad \mu_1 = \mathbf{1}_p + \frac{1}{\sqrt{s}}\begin{pmatrix}\mathbf{1}_s \\ \mathbf{0}_{p-s}\end{pmatrix}, \quad \mu_0 = \mathbf{1}_p - \frac{1}{\sqrt{s}}\begin{pmatrix}\mathbf{1}_s \\ \mathbf{0}_{p-s}\end{pmatrix}. \]
To illustrate our bound in Theorem 2, we consider three different scenarios.

[Figure 1: Simulations for different Gaussian classification problems showing the dependence of classification error on different quantities: (a) only varying p (classification error vs. n/log p, for p = 100, 400, 1600); (b) only varying s (classification error vs. n/s, for s = 5, 10, 15, 20); (c) dependence of the excess 0-1 risk on n (excess 0-1 risk vs. 1/n). All experiments plot the average of 20 trials, with regularization parameter λ = \sqrt{\log p / n}.]

In Figure 1a we vary p while keeping s and (\mu_1 - \mu_0)^{\top}\Sigma^{-1}(\mu_1 - \mu_0) constant. Figure 1a shows, for different p, how the classification error changes with increasing n.
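For readers who wish to reproduce a scaled-down version of this experiment, the setup can be sketched as follows. This is an illustrative re-implementation, not the authors' code: it uses scikit-learn's liblinear solver (which, like the configuration described above, penalizes the intercept) in place of Glmnet, and all problem sizes and the random seed are arbitrary choices.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
p, s, n, n_test = 100, 5, 2000, 5000

# Class means as in the simulations: mu1/mu0 equal 1_p everywhere except the
# first s coordinates, which are shifted by +/- 1/sqrt(s); Sigma = I_p.
mu1 = np.ones(p); mu1[:s] += 1.0 / np.sqrt(s)
mu0 = np.ones(p); mu0[:s] -= 1.0 / np.sqrt(s)

def sample(m):
    """Draw m labeled points from the balanced two-class Gaussian model."""
    y = rng.integers(0, 2, m)
    X = rng.standard_normal((m, p)) + np.where(y[:, None] == 1, mu1, mu0)
    return X, y

X, y = sample(n)
Xt, yt = sample(n_test)

# sklearn's objective is ||w||_1 + C * sum(log-loss); matching the paper's
# (1/n) * sum(log-loss) + lambda * ||w||_1 gives C = 1 / (n * lambda),
# with lambda = sqrt(log p / n) as in the simulations.
lam = np.sqrt(np.log(p) / n)
clf = LogisticRegression(penalty="l1", solver="liblinear", C=1.0 / (n * lam))
clf.fit(X, y)
err = np.mean(clf.predict(Xt) != yt)  # held-out classification error
```

Plotting `err` against n/log p (or n/s) for a grid of problem sizes reproduces the qualitative behavior of Figure 1.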
In Figure 1a we show the relationship between the classification error and the quantity n/log p. This figure agrees with our result on the excess 0-1 risk's dependence on p. In Figure 1b we vary s while keeping p and (\mu_1 - \mu_0)^{\top}\Sigma^{-1}(\mu_1 - \mu_0) constant. Figure 1b shows, for different s, how the classification error changes with increasing n. In Figure 1b we show the relationship between the classification error and the quantity n/s. This figure agrees with our result on the excess 0-1 risk's dependence on s. In Figure 1c we show how R(\hat w, \hat b) - R(w^*, b^*) changes with respect to 1/n in one instance of the Gaussian classification problem. We can see that the excess 0-1 risk achieves the fast rate and agrees with our bound.

Acknowledgements

We acknowledge the support of ARO via W911NF-12-1-0390 and NSF via IIS-1149803, IIS-1320894, IIS-1447574, and DMS-1264033, and NIH via R01 GM117594-01 as part of the Joint DMS/NIGMS Initiative to Support Research at the Interface of the Biological and Mathematical Sciences.

References

[1] Arindam Banerjee, Sheng Chen, Farideh Fazayeli, and Vidyashankar Sivakumar. Estimation with norm regularization. In Advances in Neural Information Processing Systems, pages 1556–1564, 2014.
[2] Peter L. Bartlett, Michael I. Jordan, and Jon D. McAuliffe. Convexity, classification, and risk bounds. Journal of the American Statistical Association, 101(473):138–156, 2006.
[3] Peter J. Bickel and Elizaveta Levina. Some theory for Fisher's linear discriminant function, 'naive Bayes', and some alternatives when there are many more variables than observations. Bernoulli, pages 989–1010, 2004.
[4] Peter J. Bickel and Elizaveta Levina. Covariance regularization by thresholding. The Annals of Statistics, pages 2577–2604, 2008.
[5] C. M. Bishop. Pattern Recognition and Machine Learning. Information Science and Statistics. Springer, 2006. ISBN 9780387310732.
[6] Peter Bühlmann and Sara van de Geer. Statistics for high-dimensional data: methods, theory and applications.
Springer Science & Business Media, 2011.
[7] Tony Cai and Weidong Liu. A direct estimation approach to sparse linear discriminant analysis. Journal of the American Statistical Association, 106(496), 2011.
[8] Emmanuel Candès and Terence Tao. The Dantzig selector: statistical estimation when p is much larger than n. The Annals of Statistics, pages 2313–2351, 2007.
[9] Venkat Chandrasekaran, Benjamin Recht, Pablo A. Parrilo, and Alan S. Willsky. The convex geometry of linear inverse problems. Foundations of Computational Mathematics, 12(6):805–849, 2012.
[10] Weizhu Chen, Zhenghao Wang, and Jingren Zhou. Large-scale L-BFGS using MapReduce. In Advances in Neural Information Processing Systems, pages 1332–1340, 2014.
[11] L. Devroye, L. Györfi, and G. Lugosi. A Probabilistic Theory of Pattern Recognition. Springer New York, 1996.
[12] Luc Devroye. A probabilistic theory of pattern recognition, volume 31. Springer Science & Business Media, 1996.
[13] David Donoho and Jiashun Jin. Higher criticism thresholding: Optimal feature selection when useful features are rare and weak. Proceedings of the National Academy of Sciences, 105(39):14790–14795, 2008.
[14] Jianqing Fan and Yingying Fan. High dimensional classification using features annealed independence rules. Annals of Statistics, 36(6):2605, 2008.
[15] Jianqing Fan, Yang Feng, and Xin Tong. A road to classification in high dimensional space: the regularized optimal affine discriminant. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 74(4):745–771, 2012.
[16] Rong-En Fan, Kai-Wei Chang, Cho-Jui Hsieh, Xiang-Rui Wang, and Chih-Jen Lin. LIBLINEAR: A library for large linear classification. The Journal of Machine Learning Research, 9:1871–1874, 2008.
[17] Yingying Fan, Jiashun Jin, Zhigang Yao, et al. Optimal classification in sparse Gaussian graphic model. The Annals of Statistics, 41(5):2537–2571, 2013.
[18] Manuel Fernández-Delgado, Eva Cernadas, Senén Barro, and Dinani Amorim.
Do we need hundreds of classifiers to solve real world classification problems? The Journal of Machine Learning Research, 15(1):3133–3181, 2014.
[19] Jerome Friedman, Trevor Hastie, and Rob Tibshirani. Regularization paths for generalized linear models via coordinate descent. Journal of Statistical Software, 33(1):1, 2010.
[20] Siddharth Gopal and Yiming Yang. Distributed training of large-scale logistic models. In Proceedings of the 30th International Conference on Machine Learning (ICML-13), pages 289–297, 2013.
[21] T. Hastie, R. Tibshirani, and J. Friedman. The Elements of Statistical Learning: Data Mining, Inference, and Prediction. Springer, 2009.
[22] Mladen Kolar and Han Liu. Feature selection in high-dimensional classification. In Proceedings of the 30th International Conference on Machine Learning (ICML-13), pages 329–337, 2013.
[23] Vladimir Koltchinskii. Oracle Inequalities in Empirical Risk Minimization and Sparse Recovery Problems: École d'Été de Probabilités de Saint-Flour XXXVIII-2008, volume 2033. Springer Science & Business Media, 2011.
[24] Qing Mai, Hui Zou, and Ming Yuan. A direct approach to sparse discriminant analysis in ultra-high dimensions. Biometrika, page asr066, 2012.
[25] Enno Mammen, Alexandre B. Tsybakov, et al. Smooth discrimination analysis. The Annals of Statistics, 27(6):1808–1829, 1999.
[26] Sahand Negahban, Bin Yu, Martin J. Wainwright, and Pradeep K. Ravikumar. A unified framework for high-dimensional analysis of M-estimators with decomposable regularizers. In Advances in Neural Information Processing Systems, pages 1348–1356, 2009.
[27] Andrew Y. Ng and Michael I. Jordan. On discriminative vs. generative classifiers: A comparison of logistic regression and naive Bayes. In Advances in Neural Information Processing Systems 14 (NIPS 2001), 2001.
[28] Jun Shao, Yazhen Wang, Xinwei Deng, Sijian Wang, et al. Sparse linear discriminant analysis by thresholding for high dimensional data.
The Annals of Statistics, 39(2):1241–1265, 2011.
[29] Robert Tibshirani, Trevor Hastie, Balasubramanian Narasimhan, and Gilbert Chu. Diagnosis of multiple cancer types by shrunken centroids of gene expression. Proceedings of the National Academy of Sciences, 99(10):6567–6572, 2002.
[30] Sijian Wang and Ji Zhu. Improved centroids estimation for the nearest shrunken centroid classifier. Bioinformatics, 23(8):972–979, 2007.
[31] Tong Zhang. Statistical behavior and consistency of classification methods based on convex risk minimization. Annals of Statistics, pages 56–85, 2004.
Adaptive Online Learning

Dylan J. Foster∗ (Cornell University), Alexander Rakhlin† (University of Pennsylvania), Karthik Sridharan∗ (Cornell University)

Abstract

We propose a general framework for studying adaptive regret bounds in the online learning setting, subsuming model selection and data-dependent bounds. Given a data- or model-dependent bound we ask, "Does there exist some algorithm achieving this bound?" We show that modifications to recently introduced sequential complexity measures can be used to answer this question by providing sufficient conditions under which adaptive rates can be achieved. In particular, each adaptive rate induces a set of so-called offset complexity measures, and obtaining small upper bounds on these quantities is sufficient to demonstrate achievability. A cornerstone of our analysis technique is the use of one-sided tail inequalities to bound suprema of offset random processes. Our framework recovers and improves a wide variety of adaptive bounds, including quantile bounds, second-order data-dependent bounds, and small-loss bounds. In addition, we derive a new type of adaptive bound for online linear optimization based on the spectral norm, as well as a new online PAC-Bayes theorem.

1 Introduction

Some of the recent progress on the theoretical foundations of online learning has been motivated by the parallel developments in the realm of statistical learning. In particular, this motivation has led to martingale extensions of empirical process theory, which were shown to be the "right" notions for online learnability. Two topics, however, have remained elusive thus far: obtaining data-dependent bounds and establishing model selection (or oracle-type) inequalities for online learning problems. In this paper we develop new techniques for addressing both these questions.

Oracle inequalities and model selection have been topics of intense research in statistics in the last two decades [1, 2, 3]. Given a sequence of models M₁, M₂, ...
whose union is M, one aims to derive a procedure that selects, given an i.i.d. sample of size n, an estimator \hat f from a model M_{\hat m} that trades off bias and variance. Roughly speaking, the desired oracle bound takes the form
\[ \mathrm{err}(\hat f) \ \le\ \inf_{m}\, \inf_{f\in M_m}\, \mathrm{err}(f) + \mathrm{pen}_n(m), \]
where pen_n(m) is a penalty for the model m. Such oracle inequalities are attractive because they can be shown to hold even if the overall model M is too large. A central idea in the proofs of such statements (and an idea that will appear throughout the present paper) is that pen_n(m) should be "slightly larger" than the fluctuations of the empirical process for the model m. It is therefore not surprising that concentration inequalities, and in particular Talagrand's celebrated inequality for the supremum of the empirical process, have played an important role in attaining oracle bounds. In order to select a good model in a data-driven manner, one establishes non-asymptotic data-dependent bounds on the fluctuations of an empirical process indexed by elements in each model [4].

∗Department of Computer Science †Department of Statistics

Lifting the ideas of oracle inequalities and data-dependent bounds from statistical to online learning is not an obvious task. For one, there is no concentration inequality available, even for the simple case of sequential Rademacher complexity. (For the reader already familiar with this complexity: a change of the value of one Rademacher variable results in a change of the remaining path, and hence an attempt to use a version of a bounded difference inequality grossly fails.) Luckily, as we show in this paper, the concentration machinery is not needed and one only requires a one-sided tail inequality. This realization is motivated by the recent work of [5, 6, 7].
At a high level, our approach will be to develop one-sided inequalities for the suprema of certain offset processes [7], where the offset is chosen to be "slightly larger" than the complexity of the corresponding model. We then show that these offset processes determine which data-dependent adaptive rates are achievable for online learning problems, drawing strong connections to the ideas of statistical learning described earlier.

1.1 Framework

Let X be the set of observations, D the space of decisions, and Y the set of outcomes. Let Δ(S) denote the set of distributions on a set S, and let ℓ : D × Y → ℝ be a loss function. The online learning framework is defined by the following process: for t = 1, ..., n, Nature provides input instance x_t ∈ X; the Learner selects prediction distribution q_t ∈ Δ(D); Nature provides label y_t ∈ Y, while the Learner draws prediction ŷ_t ∼ q_t and suffers loss ℓ(ŷ_t, y_t). Two important settings are supervised learning (Y ⊆ ℝ, D ⊆ ℝ) and online linear optimization (X = {0} is a singleton set, Y and D are balls in dual Banach spaces, and ℓ(ŷ, y) = ⟨ŷ, y⟩). For a class F ⊆ D^X, we define the Learner's cumulative regret to F as
\[ \sum_{t=1}^{n} \ell(\hat y_t, y_t) - \inf_{f\in F} \sum_{t=1}^{n} \ell(f(x_t), y_t). \]
A uniform regret bound B_n is achievable if there exists a randomized algorithm selecting ŷ_t such that
\[ \mathbb{E}\Big[ \sum_{t=1}^{n} \ell(\hat y_t, y_t) - \inf_{f\in F} \sum_{t=1}^{n} \ell(f(x_t), y_t) \Big] \le B_n \quad \forall x_{1:n}, y_{1:n}, \tag{1} \]
where a_{1:n} stands for {a₁, ..., a_n}. Achievable rates B_n depend on the complexity of the function class F. For example, the sequential Rademacher complexity of F is one of the tightest achievable uniform rates for a variety of loss functions [8, 7]. An adaptive regret bound has the form B_n(f; x_{1:n}, y_{1:n}) and is said to be achievable if there exists a randomized algorithm for selecting ŷ_t such that
\[ \mathbb{E}\Big[ \sum_{t=1}^{n} \ell(\hat y_t, y_t) - \sum_{t=1}^{n} \ell(f(x_t), y_t) \Big] \le B_n(f; x_{1:n}, y_{1:n}) \quad \forall x_{1:n}, y_{1:n},\ \forall f \in F. \tag{2} \]
We distinguish three types of adaptive bounds, according to whether B_n(f; x_{1:n}, y_{1:n}) depends only on f, only on (x_{1:n}, y_{1:n}), or on both quantities.
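The protocol and the notion of an achievable uniform rate can be made concrete in the finite-experts special case: there, the classical exponential weights algorithm achieves the uniform bound B_n = \sqrt{(n/2)\log N} for losses in [0, 1]. The sketch below is only an illustration of the protocol; the algorithm, the learning rate, and the rate constant are standard textbook material, not specific to the framework developed here, and the random losses merely stand in for an arbitrary adversary.

```python
import numpy as np

rng = np.random.default_rng(1)
n, N = 1000, 10                       # rounds, experts
eta = np.sqrt(8 * np.log(N) / n)      # classical learning rate

losses = rng.random((n, N))           # adversary's losses in [0, 1] (here random)
cum = np.zeros(N)                     # cumulative loss of each expert
alg_loss = 0.0
for t in range(n):
    q = np.exp(-eta * cum)            # Learner's distribution q_t over experts
    q /= q.sum()
    alg_loss += q @ losses[t]         # expected loss of the draw ŷ_t ~ q_t
    cum += losses[t]                  # Nature reveals the round-t losses

regret = alg_loss - cum.min()         # regret to the best expert in hindsight
bound = np.sqrt(n * np.log(N) / 2)    # the uniform rate B_n — holds deterministically
```

An adaptive rate B_n(f; x_{1:n}, y_{1:n}), by contrast, would replace the single worst-case quantity `bound` with a comparator- and data-dependent one.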
Whenever Bn depends on f, an adaptive regret can be viewed as an oracle inequality which penalizes each f according to a measure of its complexity (e.g. the complexity of the smallest model to which it belongs). As in statistical learning, an oracle inequality (2) may be proved for certain functions Bn(f;x1∶n,y1∶n) even if a uniform bound (1) cannot hold for any nontrivial Bn. 1.2 Related Work The case when Bn(f;x1∶n,y1∶n) = Bn(x1∶n,y1∶n) does not depend on f has received most of the attention in the literature. The focus is on bounds that can be tighter for “nice sequences,” yet maintain near-optimal worst-case guarantees. An incomplete list of prior work includes [9, 10, 11, 12], couched in the setting of online linear/convex optimization, and [13] in the experts setting. A bound of type Bn(f) was studied in [14], which presented an algorithm that competes with all experts simultaneously, but with varied regret with respect to each of them depending on the quantile of the expert. Another bound of this type was given by [15], who consider online linear optimization with an unbounded set and provide oracle inequalities with an appropriately chosen function Bn(f). Finally, the third category of adaptive bounds are those that depend on both the hypothesis f ∈F and the data. The bounds that depend on the loss of the best function (so-called “small-loss” bounds, 2 [16, Sec. 2.4], [17, 13]) fall in this category trivially, since one may overbound the loss of the best function by the performance of f. We draw attention to the recent result of [18] who show an adaptive bound in terms of both the loss of comparator and the KL divergence between the comparator and some pre-fixed prior distribution over experts. An MDL-style bound in terms of the variance of the loss of the comparator (under the distribution induced by the algorithm) was recently given in [19]. 
Our study was also partly inspired by Cover [20] who characterized necessary and sufficient conditions for achievable bounds in prediction of binary sequences. The methods in [20], however, rely on the structure of the binary prediction problem and do not readily generalize to other settings. The framework we propose recovers the vast majority of known adaptive rates in literature, including variance bounds, quantile bounds, localization-based bounds, and fast rates for small losses. It should be noted that while existing literature on adaptive online learning has focused on simple hypothesis classes such as finite experts and finite-dimensional p-norm balls, our results extend to general hypothesis classes, including large nonparametric ones discussed in [7]. 2 Adaptive Rates and Achievability: General Setup The first step in building a general theory for adaptive online learning is to identify what adaptive regret bounds are possible to achieve. Recall that an adaptive regret bound of Bn ∶F × X n × Yn →R is said to be achievable if there exists an online learning algorithm such that, (2) holds. In the rest of this work, we use the notation ...n t=1 to denote the interleaved application of the operators inside the brackets, repeated over t = 1,...,n rounds (see [21]). Achievability of an adaptive rate can be formalized by the following minimax quantity. Definition 1. Given an adaptive rate Bn we define the offset minimax value: An(F, Bn) sup xt∈X inf qt∈∆(D) sup yt∈Y E ˆyt∼qt n t=1 n t=1 `(ˆyt, yt) −inf f∈F n t=1 `(f(xt), yt) + Bn(f; x1∶n, y1∶n). An(F,Bn) quantifies how ∑n t=1 `(ˆyt,yt) −inff∈F {∑n t=1 `(f(xt),yt) + Bn(f;x1∶n,y1∶n)} behaves when the optimal learning algorithm that minimizes this difference is used against Nature trying to maximize it. Directly from this definition, An adaptive rate Bn is achievable if and only if An(F,Bn) ≤0. If Bn is a uniform rate, i.e., Bn(f;x1∶n,y1∶n) = Bn, achievability reduces to the minimax analysis explored in [8]. 
The uniform rate Bn is achievable if and only if Bn ≥Vn(F), where Vn(F) is the minimax value of the online learning game. We now focus on understanding the minimax value An(F,Bn) for general adaptive rates. We first show that the minimax value is bounded by an offset version of the sequential Rademacher complexity studied in [8]. The symmetrization Lemma 1 below provides us with the first step towards a probabilistic analysis of achievable rates. Before stating the lemma, we need to define the notion of a tree and the notion of sequential Rademacher complexity. Given a set Z, a Z-valued tree z of depth n is a sequence (zt)n t=1 of functions zt ∶{±1}t−1 →Z. One may view z as a complete binary tree decorated by elements of Z. Let ✏= (✏t)n t=1 be a sequence of independent Rademacher random variables. Then (zt(✏)) may be viewed as a predictable process with respect to the filtration St = σ(✏1,...,✏t). For a tree z, the sequential Rademacher complexity of a function class G ⊆RZ on z is defined as Rn(G,z) E✏sup g∈G n t=1 ✏tg(zt(✏)) and Rn(G) sup z Rn(G,z) . Lemma 1. For any lower semi-continuous loss `, and any adaptive rate Bn that only depends on outcomes (i.e. Bn(f;x1∶n,y1∶n) = Bn(y1∶n)), we have that An ≤sup x,y E✏sup f∈F 2 n t=1 ✏t`(f(xt(✏)),yt(✏))−Bn(y1∶n(✏)). (3) 3 Further, for any general adaptive rate Bn, An ≤sup x,y,y′ E✏sup f∈F 2 n t=1 ✏t`(f(xt(✏)),yt(✏)) −Bn(f;x1∶n(✏),y′ 2∶n+1(✏)). (4) Finally, if one considers the supervised learning problem where F ∶X →R, Y ⊂R and ` ∶R×R →R is a loss that is convex and L-Lipschitz in its first argument, then for any adaptive rate Bn, An ≤sup x,y E✏sup f∈F 2L n t=1 ✏tf(xt(✏)) −Bn(f;x1∶n(✏),y1∶n(✏)). (5) The above lemma tells us that to check whether an adaptive rate is achievable, it is sufficient to check that the corresponding adaptive sequential complexity measures are non-positive. 
We remark that if the above complexities are bounded by some positive quantity of a smaller order, one can form a new achievable rate B′ n by adding the positive quantity to Bn. 3 Probabilistic Tools As mentioned in the introduction, our technique rests on certain one-sided probabilistic inequalities. We now state the first building block: a rather straightforward maximal inequality. Proposition 2. Let I = {1,...,N}, N ≤∞, be a set of indices and let (Xi)i∈I be a sequence of random variables satisfying the following tail condition: for any ⌧> 0, P(Xi −Bi > ⌧) ≤C1 exp−⌧2(2σ2 i )+ C2 exp(−⌧si) (6) for some positive sequence (Bi), nonnegative sequence (σi) and nonnegative sequence (si) of numbers, and for constants C1,C2 ≥0. Then for any ¯σ ≤σ1, ¯s ≥s1, and ✓i = maxσi Bi 2log(σi¯σ) + 4log(i),(Bisi)−1 log i2(¯ssi)+ 1, it holds that Esup i∈I {Xi −Bi✓i} ≤3C1¯σ + 2C2(¯s)−1. (7) We remark that Bi need not be the expected value of Xi, as we are not interested in two-sided deviations around the mean. One of the approaches to obtaining oracle-type inequalities is to split a large class into smaller ones according to a “complexity radius” and control a certain stochastic process separately on each subset (also known as the peeling technique). In the applications below, Xi will often stand for the (random) supremum of this process on subset i, and Bi will be an upper bound on its typical size. Given deviation bounds for Xi above Bi, the dilated size Bi✓i then allows one to pass to maximal inequalities (7) and thus verify achievability in Lemma 1. The same strategy works for obtaining data-dependent bounds, where we first prove tail bounds for the given size of the data-dependent quantity, then appeal to (7). A simple yet powerful example for the control of the supremum of a stochastic process is an inequality due to Pinelis [22] for the norm (which is a supremum over the dual ball) of a martingale in a 2-smooth Banach space. 
Here we state a version of this result that can be found in [23, Appendix A]. Lemma 3. Let Z be a unit ball in a separable (2,D)-smooth Banach space H. For any Z-valued tree z, and any n > ⌧4D2 P n t=1 ✏tzt(✏)≥⌧≤2 exp − ⌧2 8D2n When the class of functions is not linear, we may no longer appeal to the above lemma. Instead, we make use of a result from [24] that extends Lemma 3 at a price of a poly-logarithmic factor. Before stating this lemma, we briefly define the relevant complexity measures (see [24] for more details). First, a set V of R-valued trees is called an ↵-cover of G ⊆RZ on z with respect to `p if ∀g ∈G,∀✏∈{±1}n,∃v ∈V s.t. n t=1 (g(zt(✏)) −vt(✏))p ≤n↵p. 4 The size of the smallest ↵-cover is denoted by Np(G,↵,z), and Np(G,↵,n) supz Np(G,↵,z). The set V is an ↵-cover of G on z with respect to `∞if ∀g ∈G,∀✏∈{±1},∃v ∈V s.t. g(zt(✏)) −vt(✏)≤↵ ∀t ∈[n]. We let N∞(G,↵,z) be the smallest such cover and set N∞(G,↵,n) = supz N∞(G,↵,z). Lemma 4 ([24]). Let G ⊆[−1,1]Z. Suppose Rn(G)n →0 with n →∞and that the following mild assumptions hold: Rn(G) ≥1n, N∞(G,2−1,n) ≥4, and there exists a constant Γ such that Γ ≥∑∞ j=1 N∞(G,2−j,n)−1. Then for any ✓> 12n, for any Z-valued tree z of depth n, P sup g∈G n t=1 ✏tg(zt(✏))> 81 + ✓ 8nlog3(en2)⋅Rn(G) ≤P sup g∈G n t=1 ✏tg(zt(✏))> n inf ↵>0 4↵+ 6✓ 1 ↵ log N∞(G,δ,n)dδ≤2Γe−n✓2 4 . The above lemma yields a one-sided control on the size of the supremum of the sequential Rademacher process, as required for our oracle-type inequalities. Next, we turn our attention to an offset Rademacher process, where the supremum is taken over a collection of negative-mean random variables. The behavior of this offset process was shown to govern the optimal rates of convergence for online nonparametric regression [7]. Such a one-sided control of the supremum will be necessary for some of the data-dependent upper bounds we develop. Lemma 5. Let z be a Z-valued tree of depth n, and let G ⊆RZ. 
For any γ ≥1n and ↵> 0, P sup g∈G n t=1 ✏tg(zt(✏)) −2↵g2(zt(✏))−log N2(G, γ, z) ↵ −12 √ 2 γ 1n n log N2(G, δ, z)dδ −1 > ⌧ ≤Γ exp −⌧2 2σ2 + exp −↵⌧ 2 , where Γ ≥∑ log2(2nγ) j=1 N2(G,2−jγ,z)−2 and σ = 12∫ γ 1 n nlog N2(G,δ,z)dδ. We observe that the probability of deviation has both subgaussian and subexponential components. Using the above result and Proposition 2 leads to useful bounds on the quantities in Lemma 1 for specific types of adaptive rates. Given a tree z, we obtain a bound on the expected size of the sequential Rademacher process when we subtract off the data-dependent `2-norm of the function on the tree z, adjusted by logarithmic terms. Corollary 6. Suppose G ⊆[−1,1]Z, and let z be any Z-valued tree of depth n. Assume log N2(G,δ,n) ≤δ−p for some p < 2. Then E sup g∈G,γ n t=1 ✏tg(zt(✏)) −4 2(log n) log N2(G, γ2, z) n t=1 g2(zt(✏)) + 1 −24 √ 2 log n γ 1n n log N2(G, δ, z)dδ ≤7 + 2 log n . The next corollary yields slightly faster rates than Corollary 6 when G< ∞. Corollary 7. Suppose G ⊆[−1,1]Z with G= N, and let z be any Z-valued tree of depth n. Then E sup g∈G n t=1 ✏tg(zt(✏)) −2 loglog N n t=1 g2(z(✏)) + e 32log N n t=1 g2(z(✏)) + e ≤1. 4 Achievable Bounds In this section we use Lemma 1 along with the probabilistic tools from the previous section to obtain an array of achievable adaptive bounds for various online learning problems. We subdivide the section into one subsection for each category of adaptive bound described in Section 1.1. 5 4.1 Adapting to Data Here we consider adaptive rates of the form Bn(x1∶n,y1∶n), uniform over f ∈F. We show the power of the developed tools on the following example. Example 4.1 (Online Linear Optimization in Rd). Consider the problem of online linear optimization where F = {f ∈Rd ∶f2 ≤1}, Y = {y ∶y2 ≤1}, X = {0}, and `(ˆy,y) = ˆy,y. The following adaptive rate is achievable: Bn(y1∶n) = 16 √ d log(n) ∑n t=1yty t 12 σ + 16 √ d log(n), where ⋅σ is the spectral norm. Let us deduce this result from Corollary 6. 
First, observe that ∑n t=1yty t 12 σ = sup f∶f2≤1 ∑n t=1yty t 12 f= sup f∶f2≤1 f ∑n t=1yty t f = sup f∈F ∑n t=1`2(f, yt). The linear function class F can be covered point-wise at any scale δ with (3δ)d balls and thus N(` ○F,1(2n),z) ≤(6n)d for any Y-valued tree z. We apply Corollary 6 with γ = 1n and the integral term in the corollary vanishes, yielding the claimed statement. 4.2 Model Adaptation In this subsection we focus on achievable rates for oracle inequalities and model selection, but without dependence on data. The form of the rate is therefore Bn(f). Assume we have a class F = R≥1 F(R), with the property that F(R) ⊆F(R′) for any R ≤R′. If we are told by an oracle that regret will be measured with respect to those hypotheses f ∈F with R(f) inf{R ∶ f ∈F(R)} ≤R∗, then using the minimax algorithm one can guarantee a regret bound of at most the sequential Rademacher complexity Rn(F(R∗)). On the other hand, given the optimality of the sequential Rademacher complexity for online learning problems for commonly encountered losses, we can argue that for any f ∈F chosen in hindsight, one cannot expect a regret better than order Rn(F(R(f))). In this section we show that simultaneously for all f ∈F, one can attain an adaptive upper bound of O Rn(F(R(f))) log (Rn(F(R(f))))log32 n. That is, we may predict as if we knew the optimal radius, at the price of a logarithmic factor. This is the price of adaptation. Corollary 8. For any class of predictors F with F(1) non-empty, if one considers the supervised learning problem with 1-Lipschitz loss `, the following rate is achievable: Bn(f) = log32 n K1Rn(F(2R(f))) 1 + log log(2R(f)) ⋅Rn(F(2R(f))) Rn(F(1)) + K2ΓRn(F(1)) , for absolute constants K1,K2, and Γ defined in Lemma 4. In fact, this statement is true more generally with F(2R(f)) replaced by ` ○F(2R(f)). It is tempting to attempt to prove the above statement with the exponential weights algorithm running as an aggregation procedure over the solutions for each R. 
In general, this approach will fail for two reasons. First, if function values grow with R, the exponential weights bound will scale linearly with this value. Second, an experts bound yields only a slower √n rate. As a special case of the above lemma, we obtain an online PAC-Bayesian theorem. We postpone this example to the next sub-section where we get a data-dependent version of this result. We now provide a bound for online linear optimization in 2-smooth Banach spaces that automatically adapts to the norm of the comparator. To prove it, we use the concentration bound from [22] (Lemma 3) within the proof of the above corollary to remove the extra logarithmic factors. Example 4.2 (Unconstrained Linear Optimization). Consider linear optimization with Y being the unit ball of some reflexive Banach space with norm ⋅∗. Let F = D be the dual space and the loss `(ˆy,y) = ˆy,y(where we are using ⋅,⋅to represent the linear functional in the first argument to the second argument). Define F(R) = {f f≤R} where ⋅is the norm dual to ⋅∗. If the unit ball of Y is (2,D)-smooth, then the following rate is achievable for all f with f≥1: B(f) = D√n8f1 + log(2f) + log log(2f)+ 12. For the case of a Hilbert space, the above bound was achieved by [15]. 6 4.3 Adapting to Data and Model Simultaneously We now study achievable bounds that perform online model selection in a data-adaptive way. Of specific interest is our online optimistic PAC-Bayesian bound. This bound should be compared to [18, 19], with the reader noting that it is independent of the number of experts, is algorithmindependent, and depends quadratically on the expected loss of the expert we compare against. Example 4.3 (Generalized Predictable Sequences (Supervised Learning)). Consider an online supervised learning problem with a convex 1-Lipschitz loss. 
Let (Mt)t≥1 be any predictable sequence that the learner can compute at round t based on information provided so far, including xt (One can think of the predictable sequence Mt as a prior guess for the hypothesis we would compare with in hindsight). Then the following adaptive rate is achievable: Bn(f; x1∶n) = inf γ K1 log n ⋅log N2(F, γ2, n) ⋅ n t=1 (f(xt) −Mt)2 + 1 +K2 log n γ 1n n log N2(F, δ, n)dδ + 2 log n+7 , for constants K1 = 4 √ 2,K2 = 24 √ 2 from Corollary 6. The achievability is a direct consequence of Eq. (5) in Lemma 1, followed by Corollary 6 (one can include any predictable sequence in the Rademacher average part because ∑t Mt✏t is zero mean). Particularly, if we assume that the sequential covering of class F grows as log N2(F,✏,n) ≤✏−p for some p < 2, we get that Bn(f) = ˜O ∑n t=1 (f(xt) −Mt)2 + 1 1−p 2 √n p2. As p gets closer to 0, we get full adaptivity and replace n by ∑n t=1 (f(xt) −Mt)2 + 1. On the other hand, as p gets closer to 2 (i.e. more complex function classes), we do not adapt and get a uniform bound in terms of n. For p ∈(0,2), we attain a natural interpolation. Example 4.4 (Regret to Fixed Vs Regret to Best (Supervised Learning)). Consider an online supervised learning problem with a convex 1-Lipschitz loss and let F= N. Let f ∈F be a fixed expert chosen in advance. The following bound is achievable: Bn(f, x1∶n) = 4 loglog N n t=1 (f(xt) −f (xt))2 + e 32log N n t=1 (f(xt) −f (xt))2 + e+ 2. In particular, against f we have Bn(f ,x1∶n) = O(1), and against an arbitrary expert we have Bn(f,x1∶n) = O√nlog N(log (n ⋅log N)). This bound follows from Eq. (5) in Lemma 1 followed by Corollary 7. This extends the study of [25] to supervised learning and general class of experts F. Example 4.5 (Optimistic PAC-Bayes). Assume that we have a countable set of experts and that the loss for each expert on any round is non-negative and bounded by 1. The function class F is the set of all distributions over these experts, and X = {0}. 
This setting can be formulated as online linear optimization where the loss of mixture f over experts, given instance y, is f,y, the expected loss under the mixture. The following adaptive bound is achievable: Bn(f; y1∶n) = 50 (KL(f⇡) + log(n)) n t=1 Ei∼fei, yt2 + 50 (KL(f⇡) + log(n)) + 10. This adaptive bound is an online PAC-Bayesian bound. The rate adapts not only to the KL divergence of f with fixed prior ⇡but also replaces n with ∑n t=1 Ei∼fei,yt2. Note that we have ∑n t=1 Ei∼fei,yt2 ≤∑n t=1 f,yt, yielding the small-loss type bound described earlier. This is an improvement over the bound in [18] in that the bound is independent of number of experts, and so holds even for countably infinite sets of experts. The KL term in our bound may be compared to the MDL-style term in the bound of [19]. If we have a large (but finite) number of experts and take ⇡to be uniform, the above bound provides an improvement over both [14]1 and [18]. Evaluating the above bound with a distribution f that places all its weight on any one expert appears to address the open question posed by [13] of obtaining algorithm-independent oracle-type variance bounds for experts. The proof of achievability of the above rate is shown in the appendix because it requires a slight variation on the symmetrization lemma specific to the problem. 1See [18] for a comparison of KL-based bounds and quantile bounds. 7 5 Relaxations for Adaptive Learning To design algorithms for achievable rates, we extend the framework of online relaxations from [26]. A relaxation Reln ∶n t=0 X t × Yt →R that satisfies the initial condition, Reln(x1∶n, y1∶n) ≥−inf f∈F n t=1 `(f(xt), yt) + Bn(f; x1∶n, y1∶n), (8) and the recursive condition, Reln(x1∶t−1, y1∶t−1) ≥sup xt∈X inf qt∈∆(D) sup yt∈Y Eˆy∼qt[`(ˆyt, yt) + Reln(x1∶t, y1∶t)], (9) is said to be admissible for the adaptive rate Bn. 
The relaxation’s corresponding strategy is

q̂_t = argmin_{q_t∈Δ(D)} sup_{y_t∈Y} E_{ŷ_t∼q_t}[ ℓ(ŷ_t, y_t) + Rel_n(x_{1:t}, y_{1:t}) ],

which enjoys the adaptive bound

∑_{t=1}^{n} ℓ(ŷ_t, y_t) − inf_{f∈F} [ ∑_{t=1}^{n} ℓ(f(x_t), y_t) + B_n(f; x_{1:n}, y_{1:n}) ] ≤ Rel_n(·)   ∀ x_{1:n}, y_{1:n}.

It follows immediately that the strategy achieves the rate B_n(f; x_{1:n}, y_{1:n}) + Rel_n(·). Our goal is then to find relaxations for which the strategy is computationally tractable and Rel_n(·) ≤ 0, or at least of smaller order than B_n. Similar to [26], conditional versions of the offset minimax values A_n yield admissible relaxations, but solving these relaxations may not be computationally tractable.

Example 5.1 (Online PAC-Bayes). Consider the experts setting in Example 4.5 with

B_n(f) = 3 √( 2n max{KL(f‖π), 1} ) + 4√n.

Let R_i = 2^{i−1} and let q^R(y_{1:t}) denote the exponential weights distribution with learning rate R/√n:

q^R(y_{1:t})_k ∝ π_k exp( −(R/√n) (∑_{s=1}^{t} y_s)_k ).

The following is an admissible relaxation achieving B_n:

Rel_n(y_{1:t}) = inf_{λ>0} { (1/λ) log ∑_i exp( −λ ( ∑_{s=1}^{t} ⟨q^{R_i}(y_{1:s−1}), y_s⟩ + √n R_i ) ) + 2λ(n − t) }.

Let q⋆_t be a distribution with (q⋆_t)_i ∝ exp( −(1/√n) ∑_{s=1}^{t−1} ⟨q^{R_i}(y_{1:s−1}), y_s⟩ − √n R_i ). We predict by drawing i according to q⋆_t, then drawing an expert according to q^{R_i}(y_{1:t−1}).

While in general the problem of obtaining an efficient adaptive relaxation might be hard, one can ask the question: “If an efficient relaxation Rel_n^R is available for each F(R), can one obtain an adaptive model selection algorithm for all of F?” To this end, for supervised learning problems with convex Lipschitz losses, we delineate a meta-approach which utilizes existing relaxations for each F(R).

Lemma 9. Let q_t^R(y_1, …, y_{t−1}) be the randomized strategy corresponding to Rel_n^R, obtained after observing outcomes y_1, …, y_{t−1}, and let θ : R → R be nonnegative. The following relaxation is admissible for the rate B_n(R) = Rel_n^R(·) θ(Rel_n^R(·)):

Ada_n(x_{1:t}, y_{1:t}) = sup_{x,y,y′} E_{ε_{t+1:n}} sup_{R≥1} [ Rel_n^R(x_{1:t}, y_{1:t}) − Rel_n^R(·) θ(Rel_n^R(·)) + 2 ∑_{s=t+1}^{n} ε_s E_{ŷ_s∼q_s^R(y_{1:t}, y′_{t+1:s−1}(ε))} ℓ(ŷ_s, y_s(ε)) ].
Playing according to the strategy for Ada_n will guarantee a regret bound of B_n(R) + Ada_n(·), and Ada_n(·) can be bounded using Proposition 2 when θ has the form given in that proposition. We remark that the above strategy is not necessarily obtained by running a high-level experts algorithm over the discretized values of R. It is an interesting question to determine the cases when such a strategy is optimal. More generally, when the adaptive rate B_n depends on the data, it is not possible to obtain the rates we show non-constructively in this paper using the exponential weights algorithm with meta-experts, as the required weighting over experts would be data dependent (and hence is not a prior over experts). Further, the bounds from exponential-weights-type algorithms are akin to having sub-exponential tails in Proposition 2, whereas for many problems we may have sub-gaussian tails. Obtaining computationally efficient methods from the proposed framework is an interesting research direction. Proposition 2 provides a useful non-constructive tool to establish achievable adaptive bounds, and a natural question is whether one can obtain a constructive counterpart to the proposition.

References

[1] Lucien Birgé, Pascal Massart, et al. Minimum contrast estimators on sieves: exponential bounds and rates of convergence. Bernoulli, 4(3):329–375, 1998.
[2] Gábor Lugosi and Andrew B. Nobel. Adaptive model selection using empirical complexities. Annals of Statistics, pages 1830–1864, 1999.
[3] Peter L. Bartlett, Stéphane Boucheron, and Gábor Lugosi. Model selection and error estimation. Machine Learning, 48(1-3):85–113, 2002.
[4] Pascal Massart. Concentration inequalities and model selection, volume 10. Springer, 2007.
[5] Shahar Mendelson. Learning without concentration. In Conference on Learning Theory, 2014.
[6] Tengyuan Liang, Alexander Rakhlin, and Karthik Sridharan. Learning with square loss: Localization through offset Rademacher complexity.
Proceedings of The 28th Conference on Learning Theory, 2015.
[7] Alexander Rakhlin and Karthik Sridharan. Online nonparametric regression. Proceedings of The 27th Conference on Learning Theory, 2014.
[8] Alexander Rakhlin, Karthik Sridharan, and Ambuj Tewari. Online learning: Random averages, combinatorial parameters, and learnability. In Advances in Neural Information Processing Systems 23, 2010.
[9] Elad Hazan and Satyen Kale. Extracting certainty from uncertainty: Regret bounded by variation in costs. Machine Learning, 80(2):165–188, 2010.
[10] Chao-Kai Chiang, Tianbao Yang, Chia-Jung Lee, Mehrdad Mahdavi, Chi-Jen Lu, Rong Jin, and Shenghuo Zhu. Online optimization with gradual variations. In COLT, 2012.
[11] Alexander Rakhlin and Karthik Sridharan. Online learning with predictable sequences. In Proceedings of the 26th Annual Conference on Learning Theory (COLT), 2013.
[12] John Duchi, Elad Hazan, and Yoram Singer. Adaptive subgradient methods for online learning and stochastic optimization. The Journal of Machine Learning Research, 12:2121–2159, 2011.
[13] Nicolò Cesa-Bianchi, Yishay Mansour, and Gilles Stoltz. Improved second-order bounds for prediction with expert advice. Machine Learning, 66(2-3):321–352, 2007.
[14] Kamalika Chaudhuri, Yoav Freund, and Daniel J. Hsu. A parameter-free hedging algorithm. In Advances in Neural Information Processing Systems, pages 297–305, 2009.
[15] H. Brendan McMahan and Francesco Orabona. Unconstrained online linear learning in Hilbert spaces: Minimax algorithms and normal approximations. Proceedings of The 27th Conference on Learning Theory, 2014.
[16] Nicolò Cesa-Bianchi and Gábor Lugosi. Prediction, Learning, and Games. Cambridge University Press, 2006.
[17] Nathan Srebro, Karthik Sridharan, and Ambuj Tewari. Smoothness, low noise and fast rates. In Advances in Neural Information Processing Systems, pages 2199–2207, 2010.
[18] Haipeng Luo and Robert E. Schapire. Achieving all with no parameters: Adaptive NormalHedge.
CoRR, abs/1502.05934, 2015.
[19] Wouter M. Koolen and Tim van Erven. Second-order quantile methods for experts and combinatorial games. In Proceedings of the 28th Annual Conference on Learning Theory (COLT), pages 1155–1175, 2015.
[20] Thomas M. Cover. Behavior of sequential predictors of binary sequences. In Trans. 4th Prague Conference on Information Theory, Statistical Decision Functions, Random Processes, pages 263–272. Publishing House of the Czechoslovak Academy of Sciences, 1967.
[21] Alexander Rakhlin and Karthik Sridharan. Statistical learning theory and sequential prediction, 2012. Available at http://stat.wharton.upenn.edu/~rakhlin/book_draft.pdf.
[22] Iosif Pinelis. Optimum bounds for the distributions of martingales in Banach spaces. The Annals of Probability, 22(4):1679–1706, 1994.
[23] Alexander Rakhlin, Karthik Sridharan, and Ambuj Tewari. Online learning: Beyond regret. arXiv preprint arXiv:1011.3168, 2010.
[24] Alexander Rakhlin, Karthik Sridharan, and Ambuj Tewari. Sequential complexities and uniform martingale laws of large numbers. Probability Theory and Related Fields, 2014.
[25] Eyal Even-Dar, Michael Kearns, Yishay Mansour, and Jennifer Wortman. Regret to the best vs. regret to the average. Machine Learning, 72(1-2):21–37, 2008.
[26] Alexander Rakhlin, Ohad Shamir, and Karthik Sridharan. Relax and randomize: From value to algorithms. Advances in Neural Information Processing Systems 25, pages 2150–2158, 2012.
Robust Regression via Hard Thresholding

Kush Bhatia†, Prateek Jain†, and Purushottam Kar‡∗
†Microsoft Research, India ‡Indian Institute of Technology Kanpur, India
{t-kushb,prajain}@microsoft.com, purushot@cse.iitk.ac.in

Abstract

We study the problem of Robust Least Squares Regression (RLSR) where several response variables can be adversarially corrupted. More specifically, for a data matrix X ∈ R^{p×n} and an underlying model w∗, the response vector is generated as y = X^⊤w∗ + b, where b ∈ R^n is the corruption vector supported over at most C·n coordinates. Existing exact recovery results for RLSR focus solely on L1-penalty based convex formulations and impose relatively strict model assumptions such as requiring the corruptions b to be selected independently of X. In this work, we study a simple hard-thresholding algorithm called TORRENT which, under mild conditions on X, can recover w∗ exactly even if b corrupts the response variables in an adversarial manner, i.e. both the support and entries of b are selected adversarially after observing X and w∗. Our results hold under deterministic assumptions which are satisfied if X is sampled from any sub-Gaussian distribution. Finally, unlike existing results that apply only to a fixed w∗ generated independently of X, our results are universal and hold for any w∗ ∈ R^p. Next, we propose gradient descent-based extensions of TORRENT that can scale efficiently to large scale problems, such as high dimensional sparse recovery, and prove similar recovery guarantees for these extensions. Empirically we find TORRENT, and more so its extensions, offering significantly faster recovery than the state-of-the-art L1 solvers. For instance, even on moderate-sized datasets (with p = 50K) with around 40% corrupted responses, a variant of our proposed method called TORRENT-HYB is more than 20× faster than the best L1 solver.
“If among these errors are some which appear too large to be admissible, then those equations which produced these errors will be rejected, as coming from too faulty experiments, and the unknowns will be determined by means of the other equations, which will then give much smaller errors.”
A. M. Legendre, On the Method of Least Squares. 1805.

1 Introduction

Robust Least Squares Regression (RLSR) addresses the problem of learning a reliable set of regression coefficients in the presence of several arbitrary corruptions in the response vector. Owing to the wide applicability of regression, RLSR features as a critical component of several important real-world applications in a variety of domains such as signal processing [1], economics [2], computer vision [3, 4], and astronomy [2]. Given a data matrix X = [x₁, …, x_n] with n data points in R^p and the corresponding response vector y ∈ R^n, the goal of RLSR is to learn a ŵ such that

(ŵ, Ŝ) = arg min_{w ∈ R^p, S ⊂ [n]: |S| ≥ (1−β)·n} ∑_{i∈S} (y_i − x_i^⊤ w)².   (1)

∗This work was done while P.K. was a postdoctoral researcher at Microsoft Research India.

That is, we wish to simultaneously determine the set of corruption-free points Ŝ and also estimate the best model parameters over the set of clean points. However, the optimization problem given above is non-convex (jointly in w and S) in general and might not directly admit efficient solutions. Indeed there exist reformulations of this problem that are known to be NP-hard to optimize [1]. To address this problem, most existing methods with provable guarantees assume that the observations are obtained from some generative model. A commonly adopted model is the following:

y = X^⊤w∗ + b,   (2)

where w∗ ∈ R^p is the true model vector that we wish to estimate and b ∈ R^n is the corruption vector that can have arbitrary values. A common assumption is that the corruption vector is sparsely supported, i.e. ∥b∥₀ ≤ α·n for some α > 0.
Recently, [4] and [5] obtained a surprising result which shows that one can recover w∗ exactly even when α ≲ 1, i.e., when almost all the points are corrupted, by solving an L1-penalty based convex optimization problem:

min_{w,b} ∥w∥₁ + λ∥b∥₁, s.t. X^⊤w + b = y.

However, these results require the corruption vector b to be selected oblivious of X and w∗. Moreover, the results impose severe restrictions on the data distribution, requiring that the data be either sampled from an isotropic Gaussian ensemble [4], or row-sampled from an incoherent orthogonal matrix [5]. Finally, these results hold only for a fixed w∗ and are not universal in general. In contrast, [6] studied RLSR with less stringent assumptions, allowing arbitrary corruptions in response variables as well as in the data matrix X, and proposed a trimmed inner product based algorithm for the problem. However, their recovery guarantees are significantly weaker. Firstly, they are able to recover w∗ only up to an additive error α√p (or α√s if w∗ is s-sparse). Hence, they require α ≤ 1/√p just to claim a non-trivial bound. Note that this amounts to being able to tolerate only a vanishing fraction of corruptions. More importantly, even with n → ∞ and extremely small α, they are unable to guarantee exact recovery of w∗. A similar result was obtained by [7], albeit using a sub-sampling based algorithm with stronger assumptions on b. In this paper, we focus on a simple and natural thresholding based algorithm for RLSR. At a high level, at each step t, our algorithm alternately estimates an active set S_t of “clean” points and then updates the model to obtain w^{t+1} by minimizing the least squares error on the active set. This intuitive algorithm seems to embody a long-standing heuristic first proposed by Legendre [8] over two centuries ago (see the introductory quotation in this paper) that has been adopted in later literature [9, 10] as well.
However, to the best of our knowledge, this technique has never been rigorously analyzed before in non-asymptotic settings, despite its appealing simplicity. Our Contributions: The main contribution of this paper is an exact recovery guarantee for the thresholding algorithm mentioned above that we refer to as TORRENT-FC (see Algorithm 1). We provide our guarantees in the model given in (2) where the corruptions b are selected adversarially but restricted to have at most α·n non-zero entries, where α is a global constant dependent only on X (see footnote 1). Under deterministic conditions on X, namely the subset strong convexity (SSC) and smoothness (SSS) properties (see Definition 1), we guarantee that TORRENT-FC converges at a geometric rate and recovers w∗ exactly. We further show that these properties (SSC and SSS) are satisfied w.h.p. if a) the data X is sampled from a sub-Gaussian distribution and b) n ≥ p log p. We would like to stress three key advantages of our result over the results of [4, 5]: a) we allow b to be adversarial, i.e., both the support and values of b may be selected adversarially based on X and w∗, b) we make assumptions on data that are natural, as well as significantly less restrictive than what existing methods make, and c) our analysis admits universal guarantees, i.e., holds for any w∗. We would also like to stress that while hard-thresholding based methods have been studied rigorously for the sparse-recovery problem [11, 12], hard-thresholding has not been studied formally for the robust regression problem. [13] study soft-thresholding approaches to the robust regression problem but without any formal guarantees. Moreover, the two problems are completely different, and hence techniques from sparse-recovery analysis do not extend to robust regression.
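The reason recovery is hopeless against an adaptive adversary once half the responses are corrupted (footnote 1 below) can be checked numerically. The sketch below is our own illustration (dimensions and the alternative model `w_tilde` are arbitrary choices): corrupting half the responses with b_i = x_i^⊤(w̃ − w∗) makes the observed y fit w∗ perfectly on one half of the points and w̃ perfectly on the other, so the two models are indistinguishable.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 200, 5
X = rng.standard_normal((p, n))        # columns are the data points x_i
w_star = rng.standard_normal(p)        # true model
w_tilde = rng.standard_normal(p)       # adversary's alternative model

# Adversary corrupts the first half: b_i = x_i^T (w_tilde - w_star) for i in S
S = np.arange(n // 2)
b = np.zeros(n)
b[S] = X[:, S].T @ (w_tilde - w_star)
y = X.T @ w_star + b

res_star = y - X.T @ w_star            # residuals under w_star
res_tilde = y - X.T @ w_tilde          # residuals under w_tilde
# w_star explains the second half exactly; w_tilde explains the first half exactly.
assert np.allclose(res_star[n // 2:], 0)
assert np.allclose(res_tilde[:n // 2], 0)
```

Either model attains zero squared loss on an (n/2)-sized subset, which is exactly the ambiguity the footnote describes at α = 1/2.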
¹Note that for an adaptive adversary, as is the case in our work, recovery cannot be guaranteed for α ≥ 1/2, since the adversary can introduce corruptions as b_i = x_i^⊤(w̃ − w∗) for an adversarially chosen model w̃. This would make it impossible for any algorithm to distinguish between w∗ and w̃, thus making recovery impossible.

Despite its simplicity, TORRENT-FC does not scale very well to datasets with large p, as it solves least squares problems at each iteration. We address this issue by designing a gradient descent based algorithm (TORRENT-GD), and a hybrid algorithm (TORRENT-HYB), both of which enjoy a geometric rate of convergence and can recover w∗ under the model assumptions mentioned above. We also propose extensions of TORRENT for the RLSR problem in the sparse regression setting where p ≫ n but ∥w∗∥₀ = s∗ ≪ p. Our algorithm TORRENT-HD is based on TORRENT-FC but uses the Iterative Hard Thresholding (IHT) algorithm, a popular algorithm for sparse regression. As before, we show that TORRENT-HD also converges geometrically to w∗ if a) the corruption index α is less than some constant C, b) X is sampled from a sub-Gaussian distribution, and c) n ≥ s∗ log p. Finally, we experimentally evaluate existing L1-based algorithms and our hard thresholding-based algorithms. The results demonstrate that our proposed algorithms (TORRENT-(FC/GD/HYB)) can be significantly faster than the best L1 solvers, exhibit better recovery properties, and are more robust to dense white noise. For instance, on a problem with 50K dimensions and 40% corruption, TORRENT-HYB was found to be 20× faster than L1 solvers, while also achieving lower error rates.

2 Problem Formulation

Given a set of data points X = [x₁, x₂, …, x_n], where x_i ∈ R^p, and the corresponding response vector y ∈ R^n, the goal is to recover a parameter vector w∗ which solves the RLSR problem (1). We assume that the response vector y is generated using the following model:

y = y∗ + b + ε, where y∗ = X^⊤w∗.
Hence, in the above model, (1) reduces to estimating w∗. We allow the model w∗, representing the regressor, to be chosen in an adaptive manner after the data features have been generated. The above model allows two kinds of perturbations to y_i – dense but bounded noise ε_i (e.g. white noise ε_i ∼ N(0, σ²), σ ≥ 0), as well as potentially unbounded corruptions b_i – to be introduced by an adversary. The only requirement we enforce is that the gross corruptions be sparse. ε shall represent the dense noise vector, for example ε ∼ N(0, σ²·I_{n×n}), and b the corruption vector such that ∥b∥₀ ≤ α·n for some corruption index α > 0. We shall use the notation S∗ = [n] ∖ supp(b) to denote the set of “clean” points, i.e. points that have not faced unbounded corruptions. We consider adaptive adversaries that are able to view the generated data points x_i, as well as the clean responses y∗_i and dense noise values ε_i, before deciding which locations to corrupt and by what amount. We denote the unit sphere in p dimensions using S^{p−1}. For any γ ∈ (0, 1], we let S_γ = {S ⊂ [n] : |S| = γ·n} denote the set of all subsets of size γ·n. For any set S, we let X_S := [x_i]_{i∈S} ∈ R^{p×|S|} denote the matrix whose columns are composed of points in that set. Also, for any vector v ∈ R^n we use the notation v_S to denote the |S|-dimensional vector consisting of those components that are in S. We use λ_min(X) and λ_max(X) to denote, respectively, the smallest and largest eigenvalues of a square symmetric matrix X. We now introduce two properties, namely Subset Strong Convexity and Subset Strong Smoothness, which are key to our analyses.

Definition 1 (SSC and SSS Properties). A matrix X ∈ R^{p×n} satisfies the Subset Strong Convexity Property (resp. Subset Strong Smoothness Property) at level γ with strong convexity constant λ_γ (resp. strong smoothness constant Λ_γ) if the following holds:

λ_γ ≤ min_{S∈S_γ} λ_min(X_S X_S^⊤) ≤ max_{S∈S_γ} λ_max(X_S X_S^⊤) ≤ Λ_γ.

Remark 1.
We note that the uniformity enforced in the definitions of the SSC and SSS properties is not for the sake of convenience but rather a necessity. Indeed, a uniform bound is required in the face of an adversary which can perform corruptions after data and response variables have been generated, and choose to corrupt precisely that set of points where the SSC and SSS parameters are the worst.

3 TORRENT: Thresholding Operator-based Robust Regression Method

We now present TORRENT, a Thresholding Operator-based Robust RegrEssioN meThod for performing robust regression at scale. Key to our algorithms is the Hard Thresholding Operator, which we define below.

Algorithm 1 TORRENT: Thresholding Operator-based Robust RegrEssioN meThod
Input: Training data {x_i, y_i}, i = 1 … n, step length η, thresholding parameter β, tolerance ϵ
1: w⁰ ← 0, S₀ = [n], t ← 0, r⁰ ← y
2: while ∥r^t_{S_t}∥₂ > ϵ do
3:   w^{t+1} ← UPDATE(w^t, S_t, η, r^t, S_{t−1})
4:   r^{t+1}_i ← y_i − ⟨w^{t+1}, x_i⟩
5:   S_{t+1} ← HT(r^{t+1}, (1 − β)n)
6:   t ← t + 1
7: end while
8: return w^t

Algorithm 2 UPDATE TORRENT-FC
Input: Current model w, current active set S
1: return arg min_w ∑_{i∈S} (y_i − ⟨w, x_i⟩)²

Algorithm 3 UPDATE TORRENT-GD
Input: Current model w, current active set S, step size η
1: g ← X_S(X_S^⊤ w − y_S)
2: return w − η·g

Algorithm 4 UPDATE TORRENT-HYB
Input: Current model w, current active set S, step size η, current residuals r, previous active set S′
1: // Use the GD update if the active set S is changing a lot
2: if |S ∖ S′| > ∆ then
3:   w′ ← UPDATE-GD(w, S, η, r, S′)
4: else
5:   // If stable, use the FC update
6:   w′ ← UPDATE-FC(w, S)
7: end if
8: return w′

Definition 2 (Hard Thresholding Operator). For any vector v ∈ R^n, let σ_v ∈ S_n be the permutation that orders the elements of v in ascending order of their magnitudes, i.e. |v_{σ_v(1)}| ≤ |v_{σ_v(2)}| ≤ … ≤ |v_{σ_v(n)}|. Then for any k ≤ n, we define the hard thresholding operator as

HT(v; k) = { i ∈ [n] : σ_v^{−1}(i) ≤ k }.

Using this operator, we present our algorithm TORRENT (Algorithm 1) for robust regression. TORRENT follows a most natural iterative strategy of alternately estimating an active set of points which have the least residual error on the current regressor, and then updating the regressor to provide a better fit on this active set. We offer three variants of our algorithm, based on how aggressively the algorithm tries to fit the regressor to the current active set. We first propose a fully corrective algorithm TORRENT-FC (Algorithm 2) that performs a fully corrective least squares regression step in an effort to minimize the regression error on the active set. This algorithm makes significant progress in each step, but at a cost of more expensive updates.
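A minimal NumPy sketch of the hard thresholding operator and the fully corrective loop follows. This is our own illustration, not the authors' code; the problem sizes, seed, and fixed iteration count (in place of the residual-norm stopping test) are ad hoc choices.

```python
import numpy as np

def ht(v, k):
    """HT(v; k): indices of the k entries of v with smallest magnitude."""
    return np.argsort(np.abs(v))[:k]

def torrent_fc(X, y, beta, iters=30):
    """Fully-corrective TORRENT: alternate least squares on the active set
    with hard thresholding of the residuals (Algorithms 1 and 2)."""
    n = X.shape[1]
    k = int((1 - beta) * n)            # active-set size (1 - beta) * n
    S = np.arange(n)                   # S_0 = [n]
    for _ in range(iters):
        w, *_ = np.linalg.lstsq(X[:, S].T, y[S], rcond=None)  # fully corrective step
        r = y - X.T @ w                # residuals r_i = y_i - <w, x_i>
        S = ht(r, k)                   # keep the points with least residual
    return w

# Illustrative run: 10% of responses grossly corrupted, no dense noise.
rng = np.random.default_rng(0)
n, p, alpha = 300, 10, 0.1
X = rng.standard_normal((p, n))
w_star = rng.standard_normal(p)
y = X.T @ w_star
bad = rng.choice(n, size=int(alpha * n), replace=False)
y[bad] += 10 * rng.standard_normal(bad.size)   # unbounded corruptions
w_hat = torrent_fc(X, y, beta=0.15)
assert np.linalg.norm(w_hat - w_star) < 1e-4
```

Once the active set contains only clean points, the least squares step returns w∗ exactly (σ = 0 here), which is the intuition behind the exact-recovery guarantee of the next section.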
To address this, we then propose a milder, gradient descent-based variant TORRENT-GD (Algorithm 3) that performs a much cheaper update of taking a single step in the direction of the gradient of the objective function on the active set. This reduces the regression error on the active set but does not minimize it. This turns out to be beneficial in situations where dense noise is present along with sparse corruptions, since it prevents the algorithm from overfitting to the current active set. Both the algorithms proposed above have their pros and cons – the FC algorithm provides significant improvements with each step but is expensive to execute, whereas the GD variant, although efficient in executing each step, offers slower progress. To get the best of both these algorithms, we propose a third, hybrid variant TORRENT-HYB (Algorithm 4) that adaptively selects either the FC or the GD update depending on whether the active set is stable across iterations or not. In the next section we show that this hard thresholding-based strategy offers a linear convergence rate for the algorithm in all its three variations. We shall also demonstrate the applicability of this technique to high dimensional sparse recovery settings in a subsequent section.

4 Convergence Guarantees

For the sake of ease of exposition, we will first present our convergence analyses for cases where dense noise is not present, i.e. y = X^⊤w∗ + b, and will handle cases with dense noise and sparse corruptions later. We first analyze the fully corrective TORRENT-FC algorithm. The convergence proof in this case relies on the optimality of the two steps carried out by the algorithm: the fully corrective step that selects the best regressor on the active set, and the hard thresholding step that discovers a new active set by selecting points with the least residual error on the current regressor.

Theorem 3. Let X = [x1, . . .
, x_n] ∈ R^{p×n} be the given data matrix and y = X^⊤w∗ + b be the corrupted output with ∥b∥₀ ≤ α·n. Let Algorithm 2 be executed on this data with the thresholding parameter set to β ≥ α. Let Σ₀ be an invertible matrix such that X̃ = Σ₀^{−1/2} X satisfies the SSC and SSS properties at level γ with constants λ_γ and Λ_γ respectively (see Definition 1). If the data satisfies (1+√2)Λ_β / λ_{1−β} < 1, then after t = O( log( ∥b∥₂ / (√n · ϵ) ) ) iterations, Algorithm 2 obtains an ϵ-accurate solution w^t, i.e. ∥w^t − w∗∥₂ ≤ ϵ.

Proof (Sketch). Let r^t = y − X^⊤w^t be the vector of residuals at time t and C_t = X_{S_t} X_{S_t}^⊤. Also let S∗ = [n] ∖ supp(b) be the set of uncorrupted points. The fully corrective step ensures that

w^{t+1} = C_t^{−1} X_{S_t} y_{S_t} = C_t^{−1} X_{S_t} (X_{S_t}^⊤ w∗ + b_{S_t}) = w∗ + C_t^{−1} X_{S_t} b_{S_t},

whereas the hard thresholding step ensures that ∥r^{t+1}_{S_{t+1}}∥₂² ≤ ∥r^{t+1}_{S∗}∥₂². Combining the two gives us

∥b_{S_{t+1}}∥₂² ≤ ∥X_{S∗∖S_{t+1}}^⊤ C_t^{−1} X_{S_t} b_{S_t}∥₂² + 2 · b_{S_{t+1}}^⊤ X_{S_{t+1}}^⊤ C_t^{−1} X_{S_t} b_{S_t}
  (ζ₁)= ∥X̃_{S∗∖S_{t+1}}^⊤ (X̃_{S_t} X̃_{S_t}^⊤)^{−1} X̃_{S_t} b_{S_t}∥₂² + 2 · b_{S_{t+1}}^⊤ X̃_{S_{t+1}}^⊤ (X̃_{S_t} X̃_{S_t}^⊤)^{−1} X̃_{S_t} b_{S_t}
  (ζ₂)≤ (Λ_β² / λ_{1−β}²) · ∥b_{S_t}∥₂² + 2 · (Λ_β / λ_{1−β}) · ∥b_{S_t}∥₂ ∥b_{S_{t+1}}∥₂,

where ζ₁ follows from setting X̃ = Σ₀^{−1/2} X and X_S^⊤ C_t^{−1} X_{S′} = X̃_S^⊤ (X̃_{S_t} X̃_{S_t}^⊤)^{−1} X̃_{S′}, and ζ₂ follows from the SSC and SSS properties, ∥b_{S_t}∥₀ ≤ ∥b∥₀ ≤ β·n, and |S∗ ∖ S_{t+1}| ≤ β·n. Solving the quadratic equation and performing other manipulations gives us the claimed result.

Theorem 3 relies on a deterministic (fixed design) assumption, specifically (1+√2)Λ_β / λ_{1−β} < 1, in order to guarantee convergence. We can show that a large class of random designs, including Gaussian and sub-Gaussian designs, actually satisfies this requirement. That is to say, data generated from these distributions satisfy the SSC and SSS conditions such that (1+√2)Λ_β / λ_{1−β} < 1 with high probability. Theorem 4 explicates this for the class of Gaussian designs.

Theorem 4. Let X = [x₁, …, x_n] ∈ R^{p×n} be the given data matrix with each x_i ∼ N(0, Σ). Let y = X^⊤w∗ + b and ∥b∥₀ ≤ α·n. Also, let α ≤ β < 1/65 and n ≥ Ω(p + log(1/δ)). Then, with probability at least 1−δ, the data satisfies (1+√2)Λ_β / λ_{1−β} < 9/10. More specifically, after T ≥ 10 log( ∥b∥₂ / (√n · ϵ) ) iterations of Algorithm 1 with the thresholding parameter set to β, we have ∥w^T − w∗∥₂ ≤ ϵ.

Remark 2. Note that Theorem 4 provides rates that are independent of the condition number λ_max(Σ)/λ_min(Σ) of the distribution. We also note that results similar to Theorem 4 can be proven for the larger class of sub-Gaussian distributions. We refer the reader to Section G for the same.

Remark 3. We remind the reader that our analyses can readily accommodate dense noise in addition to sparse unbounded corruptions. We direct the reader to Appendix A, which presents convergence proofs for our algorithms in these settings.

Remark 4. We would like to point out that the design requirements made by our analyses are very mild when compared to existing literature. Indeed, the work of [4] assumes the Bouquet Model where distributions are restricted to be isotropic Gaussians, whereas the work of [5] assumes a more stringent model of sub-orthonormal matrices, something that even Gaussian designs do not satisfy. Our analyses, on the other hand, hold for the general class of sub-Gaussian distributions.

We now analyze the TORRENT-GD algorithm, which performs cheaper, gradient-style updates on the active set. We will show that this method nevertheless enjoys a linear rate of convergence.

Theorem 5. Let the data settings be as stated in Theorem 3 and let Algorithm 3 be executed on this data with the thresholding parameter set to β ≥ α and the step length set to η = 1/Λ_{1−β}. If the data satisfies max{ η√Λ_β, 1 − ηλ_{1−β} } ≤ 1/4, then after t = O( log( ∥b∥₂ / (√n · ϵ) ) ) iterations, Algorithm 1 obtains an ϵ-accurate solution w^t, i.e. ∥w^t − w∗∥₂ ≤ ϵ.

Similar to TORRENT-FC, the assumptions made by the TORRENT-GD algorithm are also satisfied by the class of sub-Gaussian distributions. The proof of Theorem 5, given in Appendix D, details these arguments. Given the convergence analyses for TORRENT-FC and GD, we now move on to provide a convergence analysis for the hybrid TORRENT-HYB algorithm, which interleaves FC and GD steps.
Since the exact interleaving adopted by the algorithm depends on the data and is not known in advance, this poses a problem. We address this problem by giving below a uniform convergence guarantee, one that applies to every interleaving of the FC and GD update steps.

Theorem 6. Suppose Algorithm 4 is executed on data that allows Algorithms 2 and 3 a convergence rate of η_FC and η_GD respectively. Suppose we have 2·η_FC·η_GD < 1. Then for any interleaving of the FC and GD steps that the policy may enforce, after t = O( log( ∥b∥₂ / (√n · ϵ) ) ) iterations, Algorithm 4 ensures an ϵ-optimal solution, i.e. ∥w^t − w∗∥ ≤ ϵ.

We point out to the reader that the assumption made by Theorem 6, i.e. 2·η_FC·η_GD < 1, is readily satisfied by random sub-Gaussian designs, albeit at the cost of reducing the noise tolerance limit. As we shall see, TORRENT-HYB offers attractive convergence properties, merging the fast convergence rates of the FC step with the speed and protection against overfitting provided by the GD step.

5 High-dimensional Robust Regression

In this section, we extend our approach to the robust high-dimensional sparse recovery setting. As before, we assume that the response vector y is obtained as y = X^⊤w∗ + b, where ∥b∥₀ ≤ α·n. However, this time we also assume that w∗ is s∗-sparse, i.e. ∥w∗∥₀ ≤ s∗. As before, we shall neglect white/dense noise for the sake of simplicity. We reiterate that it is not possible to use existing results from sparse recovery (such as [11, 12]) directly to solve this problem. Our objective would be to recover a sparse model ŵ so that ∥ŵ − w∗∥₂ ≤ ϵ. The challenge here is to forgo a sample complexity of n ≳ p and instead perform recovery with only n ∼ s∗ log p samples. For this setting, we modify the FC update step of the TORRENT-FC method to the following:

w^{t+1} ← arg min_{∥w∥₀ ≤ s} ∑_{i∈S_t} (y_i − ⟨w, x_i⟩)²,   (3)

for some target sparsity level s ≪ p. We refer to this modified algorithm as TORRENT-HD.
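The sparse update (3) can be approximated by running IHT inside each TORRENT iteration. Below is a bare-bones sketch of the IHT inner solver alone, run here on clean data (our illustration, not the authors' implementation; the unit step size and iteration count are ad hoc choices that suit the well-conditioned Gaussian design used in the example):

```python
import numpy as np

def hard_threshold_weights(w, s):
    """Keep the s largest-magnitude coordinates of w, zero out the rest."""
    out = np.zeros_like(w)
    keep = np.argsort(np.abs(w))[-s:]
    out[keep] = w[keep]
    return out

def iht(X, y, s, iters=100):
    """Iterative Hard Thresholding for min_{||w||_0 <= s} sum_i (y_i - <w, x_i>)^2,
    with X of shape (p, n) holding the points as columns."""
    p, n = X.shape
    w = np.zeros(p)
    for _ in range(iters):
        grad = X @ (X.T @ w - y) / n             # gradient of the scaled LS objective
        w = hard_threshold_weights(w - grad, s)  # unit step, since X X^T / n ~ I_p
    return w

rng = np.random.default_rng(1)
p, n, s_star = 50, 400, 3
X = rng.standard_normal((p, n))
w_star = np.zeros(p)
w_star[:s_star] = [1.0, -2.0, 0.5]
y = X.T @ w_star                                 # clean responses (no corruptions here)
w_hat = iht(X, y, s=s_star)
assert np.linalg.norm(w_hat - w_star) < 1e-6
```

In TORRENT-HD this inner solve would be applied to the current active set X[:, S_t], y[S_t] instead of the full data; we omit that wrapper for brevity.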
Assuming X satisfies the RSC/RSS properties (defined below), (3) can be solved efficiently using results from sparse recovery (for example the IHT algorithm [11, 14] analyzed in [12]).

Definition 7 (RSC and RSS Properties). A matrix X ∈ R^{p×n} will be said to satisfy the Restricted Strong Convexity Property (resp. Restricted Strong Smoothness Property) at level s = s₁ + s₂ with strong convexity constant α_{s₁+s₂} (resp. strong smoothness constant L_{s₁+s₂}) if the following holds for all ∥w₁∥₀ ≤ s₁ and ∥w₂∥₀ ≤ s₂:

α_s ∥w₁ − w₂∥₂² ≤ ∥X^⊤(w₁ − w₂)∥₂² ≤ L_s ∥w₁ − w₂∥₂².

For our results, we shall require the subset versions of both these properties.

Definition 8 (SRSC and SRSS Properties). A matrix X ∈ R^{p×n} will be said to satisfy the Subset Restricted Strong Convexity (resp. Subset Restricted Strong Smoothness) Property at level (γ, s) with strong convexity constant α_{(γ,s)} (resp. strong smoothness constant L_{(γ,s)}) if for all subsets S ∈ S_γ, the matrix X_S satisfies the RSC (resp. RSS) property at level s with constant α_s (resp. L_s).

We now state the convergence result for the TORRENT-HD algorithm.

Theorem 9. Let X ∈ R^{p×n} be the given data matrix and y = X^⊤w∗ + b be the corrupted output with ∥w∗∥₀ ≤ s∗ and ∥b∥₀ ≤ α·n. Let Σ₀ be an invertible matrix such that Σ₀^{−1/2} X satisfies the SRSC and SRSS properties at level (γ, 2s+s∗) with constants α_{(γ,2s+s∗)} and L_{(γ,2s+s∗)} respectively (see Definition 8). Let Algorithm 2 be executed on this data with the TORRENT-HD update, thresholding parameter set to β ≥ α, and s ≥ 32 L_{(1−β,2s+s∗)} / α_{(1−β,2s+s∗)}. If X also satisfies 4L_{(β,s+s∗)} / α_{(1−β,s+s∗)} < 1, then after t = O( log( ∥b∥₂ / (√n · ϵ) ) ) iterations, Algorithm 2 obtains an ϵ-accurate solution w^t, i.e. ∥w^t − w∗∥₂ ≤ ϵ. In particular, if X is sampled from a Gaussian distribution N(0, Σ) and n ≥ Ω( s∗ · (λ_max(Σ)/λ_min(Σ)) log p ), then for all values of α ≤ β < 1/65, we can guarantee ∥w^t − w∗∥₂ ≤ ϵ after t = O( log( ∥b∥₂ / (√n · ϵ) ) ) iterations of the algorithm (w.p. ≥ 1 − 1/n¹⁰).

Remark 5. The sample complexity required by Theorem 9 is identical to the one required by analyses for high dimensional sparse recovery [12], save constants. Also note that TORRENT-HD can tolerate the same corruption index as TORRENT-FC.

Figure 1: (a), (b) and (c) Phase-transition diagrams depicting the recovery properties of the TORRENT-FC, TORRENT-HYB and L1 algorithms. The colors red and blue represent a high and low probability of success, respectively. A method is considered successful in an experiment if it recovers w∗ up to a 10⁻⁴ relative error. Both variants of TORRENT can be seen to recover w∗ in the presence of a larger number of corruptions than the L1 solver. (d) Variation in recovery error with the magnitude of corruption. As the corruption is increased, TORRENT-FC and TORRENT-HYB show improved performance while the problem becomes more difficult for the L1 solver. [Panels (a)-(c): success probability over total vs. corrupted points for p = 50, σ = 0; panel (d): ∥w − w∗∥₂ vs. magnitude of corruption for p = 500, n = 2000, α = 0.25, σ = 0.2.]

6 Experiments

Several numerical simulations were carried out on linear regression problems in low-dimensional, as well as sparse high-dimensional settings. The experiments show that TORRENT not only offers statistically better recovery properties as compared to L1-style approaches, but that it can be more than an order of magnitude faster as well.

Data: For the low dimensional setting, the regressor w∗ ∈ R^p was chosen to be a random unit norm vector. Data was sampled as x_i ∼ N(0, I_p) and response variables were generated as y∗_i = ⟨w∗, x_i⟩. The set of corrupted points was selected as a uniformly random (αn)-sized subset of [n] and the corruptions were set to b_i ∼ U(−5∥y∗∥_∞, 5∥y∗∥_∞) for the corrupted points. The corrupted responses were then generated as y_i = y∗_i + b_i + ε_i, where ε_i ∼ N(0, σ²). For the sparse high-dimensional setting, supp(w∗) was selected to be a random s∗-sized subset of [p]. Phase-transition diagrams (Figure 1) were generated by repeating each experiment 100 times.
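The data-generation protocol just described can be reproduced in a few lines. The sketch below is our own rendering of the described setup, not the authors' experiment code; the instance sizes follow one of the paper's low-dimensional configurations.

```python
import numpy as np

def make_rlsr_instance(n, p, alpha, sigma, rng):
    """Generate (X, y, w_star) per the low-dimensional protocol: unit-norm
    w_star, x_i ~ N(0, I_p), uniform corruptions on a random (alpha * n)-sized
    subset, plus dense noise N(0, sigma^2)."""
    w_star = rng.standard_normal(p)
    w_star /= np.linalg.norm(w_star)            # random unit-norm regressor
    X = rng.standard_normal((p, n))
    y_clean = X.T @ w_star
    bad = rng.choice(n, size=int(alpha * n), replace=False)
    scale = 5 * np.max(np.abs(y_clean))         # 5 * ||y*||_inf
    b = np.zeros(n)
    b[bad] = rng.uniform(-scale, scale, size=bad.size)
    y = y_clean + b + sigma * rng.standard_normal(n)
    return X, y, w_star

X, y, w_star = make_rlsr_instance(n=2000, p=500, alpha=0.25, sigma=0.2,
                                  rng=np.random.default_rng(0))
assert X.shape == (500, 2000) and y.shape == (2000,)
assert abs(np.linalg.norm(w_star) - 1) < 1e-12
```

Repeating this generator over a grid of (n, α) values and counting successful recoveries is exactly how the phase-transition diagrams are populated.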
For all other plots, each experiment was run over 20 random instances of the data and the plots depict the mean results.

Algorithms: We compared several variants of our algorithm TORRENT to the regularized L1 algorithm for robust regression [4, 5]. Note that the L1 problem can be written as $\min_z \|z\|_1$ s.t. $Az = y$, where $A = [X^\top \;\; \frac{1}{\lambda} I_{m \times m}]$ and $z^* = [w^{*\top} \;\; \lambda b^\top]^\top$. We used the Dual Augmented Lagrange Multiplier (DALM) L1 solver implemented by [15] to solve the L1 problem. We ran a fine-tuned grid search over the $\lambda$ parameter for the L1 solver and quote the best results obtained from the search. In the low-dimensional setting, we compared the recovery properties of TORRENT-FC (Algorithm 2) and TORRENT-HYB (Algorithm 4) with the DALM-L1 solver, while for the high-dimensional case, we compared TORRENT-HD against the DALM-L1 solver. Both the L1 solver and our methods were implemented in Matlab and run on a single-core 2.4 GHz machine with 8 GB RAM.

Choice of L1 solver: An extensive comparative study of various L1 minimization algorithms was performed by [15], who showed that the DALM and Homotopy solvers outperform other counterparts both in terms of recovery properties and timings. We extended their study to our observation model and found the DALM solver to be significantly better than the other L1 solvers; see Figure 3 in the appendix. We also observed, similar to [15], that the Approximate Message Passing (AMP) solver diverges on our problem, as the input matrix to the L1 solver, $A = [X^\top \;\; \frac{1}{\lambda} I]$, is non-Gaussian.
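The augmented reformulation $\min_z \|z\|_1$ s.t. $Az = y$ can be verified mechanically; a small numpy sketch of building the system (the helper name is ours):

```python
import numpy as np

def augmented_l1_system(X, y, lam):
    """X is p x n as in the paper, with y = X^T w* + b.  Returns the
    augmented matrix A = [X^T  (1/lam) I_n], so that A z = y holds for
    z = [w; lam * b]:  A z = X^T w + (1/lam)(lam b) = X^T w + b."""
    p, n = X.shape
    A = np.hstack([X.T, np.eye(n) / lam])   # shape n x (p + n)
    return A, y
```

The point of the construction is that a sparse corruption vector $b$ makes the stacked $z^*$ sparse, so standard L1 recovery machinery applies.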
[Figure 2: four plots of $\|w - w^*\|_2$: (a) against the fraction of corrupted points at p = 500, n = 2000, sigma = 0.2; (b) against time at p = 300, n = 1800, alpha = 0.41, kappa = 5; (c) against the fraction of corrupted points at p = 10000, n = 2303, s = 50; (d) against time at p = 50000, n = 5410, alpha = 0.4, s = 100.]

Figure 2: In low-dimensional (a, b), as well as sparse high-dimensional (c, d) settings, TORRENT offers better recovery as the fraction of corrupted points $\alpha$ is varied. In terms of runtime, TORRENT is an order of magnitude faster than L1 solvers in both settings. In the low-dimensional setting, TORRENT-HYB is the fastest of all the variants.

Evaluation Metric: We measure the performance of the various algorithms using the standard L2 error: $r_{\hat w} = \|\hat w - w^*\|_2$. For the phase-transition plots (Figure 1), we deemed an algorithm successful on an instance if it obtained a model $\hat w$ with error $r_{\hat w} < 10^{-4} \cdot \|w^*\|_2$. We also measured the CPU time required by each of the methods, so as to compare their scalability.

6.1 Low-Dimensional Results

Recovery Property: The phase-transition plots presented in Figure 1 represent our recovery experiments in graphical form. Both the fully-corrective and hybrid variants of TORRENT show better recovery properties than the L1-minimization approach, as indicated by the number of runs (out of 100) in which each algorithm was able to correctly recover $w^*$. Figure 2 shows the variation in recovery error as a function of $\alpha$ in the presence of white noise and exhibits the superiority of TORRENT-FC and TORRENT-HYB over L1-DALM. Here again, TORRENT-FC and TORRENT-HYB achieve significantly lower recovery error than L1-DALM for all $\alpha \leq 0.5$.
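For intuition about the behaviour observed above, here is a heavily simplified numpy sketch of a fully-corrective iteration, alternating a least-squares fit on the currently trusted points with hard thresholding of residuals; it follows the high-level description in the text, not the paper's exact Algorithm 2 (the stopping rule and parameter choices are simplified):

```python
import numpy as np

def torrent_fc(X, y, beta, iters=10):
    """Sketch of a fully-corrective hard-thresholding loop.  X is p x n,
    y = X^T w* + b.  Each iteration (i) fits least squares on the active
    set S and (ii) re-selects the (1 - beta) * n points with smallest
    absolute residual as the new active set."""
    p, n = X.shape
    keep = int((1 - beta) * n)
    S = np.arange(n)                        # start by trusting all points
    w = np.zeros(p)
    for _ in range(iters):
        w, *_ = np.linalg.lstsq(X[:, S].T, y[S], rcond=None)
        r = np.abs(y - X.T @ w)             # residuals on all points
        S = np.argsort(r)[:keep]            # keep the smallest residuals
    return w, S
```

With noiseless data and gross corruptions, the corrupted points acquire large residuals after the first fit, get thresholded away, and the subsequent fit on clean points is exact, which matches the phase-transition behaviour reported above.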
Figure 3 in the appendix shows that the variation of $\|\hat w - w^*\|_2$ with varying $p$, $\sigma$ and $n$ follows a similar trend, with TORRENT having significantly lower recovery error than the L1 approach. Figure 1(d) brings out an interesting trend in the recovery property of TORRENT. As we increase the magnitude of corruption from $U(-\|y^*\|_\infty, \|y^*\|_\infty)$ to $U(-20\|y^*\|_\infty, 20\|y^*\|_\infty)$, the recovery error for TORRENT-HYB and TORRENT-FC decreases as expected, since it becomes easier to identify the grossly corrupted points. The L1 solver, however, was unable to exploit this and in fact exhibited an increase in recovery error.

Run Time: In order to assess the performance of TORRENT on ill-conditioned problems, we performed an experiment where data was sampled as $x_i \sim N(0, \Sigma)$ with $\mathrm{diag}(\Sigma) \sim U(0, 5)$. Figure 2 plots the recovery error as a function of time. TORRENT-HYB was able to correctly recover $w^*$ about 50x faster than L1-DALM, which spent a considerable amount of time pre-processing the data matrix $X$. Even after being allowed to run for 500 iterations, the L1 algorithm was unable to reach the desired residual error of $10^{-4}$. Figure 2 also shows that our TORRENT-HYB algorithm converges to the optimal solution much faster than TORRENT-FC or TORRENT-GD. This is because TORRENT-FC solves a least-squares problem at each step: even though it requires significantly fewer iterations to converge, each iteration is itself very expensive. While each iteration of TORRENT-GD is cheap, it is still limited by the slow $O\left((1 - \frac{1}{\kappa})^t\right)$ convergence rate of the gradient descent algorithm, where $\kappa$ is the condition number of the covariance matrix. TORRENT-HYB, on the other hand, is able to combine the strengths of both methods to achieve faster convergence.

6.2 High-Dimensional Results

Recovery Property: Figure 2 shows the variation in recovery error in the high-dimensional setting as the number of corrupted points was varied.
For these experiments, $n$ was set to $5 s^* \log(p)$ and the fraction of corrupted points $\alpha$ was varied from 0.1 to 0.7. While L1-DALM fails to recover $w^*$ for $\alpha > 0.5$, TORRENT-HD offers perfect recovery even for $\alpha$ values up to 0.7.

Run Time: Figure 2 shows the variation in recovery error as a function of run time in this setting. L1-DALM was found to be an order of magnitude slower than TORRENT-HD, making it infeasible for sparse high-dimensional settings. One key reason for this is that the L1-DALM solver is significantly slower in identifying the set of clean points. For instance, whereas TORRENT-HD was able to identify the clean set of points in only 5 iterations, it took L1 around 250 iterations to do the same.

References

[1] Christoph Studer, Patrick Kuppinger, Graeme Pope, and Helmut Bölcskei. Recovery of Sparsely Corrupted Signals. IEEE Transactions on Information Theory, 58(5):3115–3130, 2012.
[2] Peter J. Rousseeuw and Annick M. Leroy. Robust Regression and Outlier Detection. John Wiley and Sons, 1987.
[3] John Wright, Allen Y. Yang, Arvind Ganesh, S. Shankar Sastry, and Yi Ma. Robust Face Recognition via Sparse Representation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 31(2):210–227, 2009.
[4] John Wright and Yi Ma. Dense Error Correction via ℓ1-Minimization. IEEE Transactions on Information Theory, 56(7):3540–3560, 2010.
[5] Nam H. Nguyen and Trac D. Tran. Exact recoverability from dense corrupted observations via L1 minimization. IEEE Transactions on Information Theory, 59(4):2036–2058, 2013.
[6] Yudong Chen, Constantine Caramanis, and Shie Mannor. Robust Sparse Regression under Adversarial Corruption. In 30th International Conference on Machine Learning (ICML), 2013.
[7] Brian McWilliams, Gabriel Krummenacher, Mario Lucic, and Joachim M. Buhmann. Fast and Robust Least Squares Estimation in Corrupted Linear Models. In 28th Annual Conference on Neural Information Processing Systems (NIPS), 2014.
[8] Adrien-Marie Legendre (1805).
On the Method of Least Squares. In D. E. Smith, editor, A Source Book in Mathematics (translated from the French), pages 576–579. New York: Dover Publications, 1959.
[9] Peter J. Rousseeuw. Least Median of Squares Regression. Journal of the American Statistical Association, 79(388):871–880, 1984.
[10] Peter J. Rousseeuw and Katrien van Driessen. Computing LTS Regression for Large Data Sets. Data Mining and Knowledge Discovery, 12(1):29–45, 2006.
[11] Thomas Blumensath and Mike E. Davies. Iterative Hard Thresholding for Compressed Sensing. Applied and Computational Harmonic Analysis, 27(3):265–274, 2009.
[12] Prateek Jain, Ambuj Tewari, and Purushottam Kar. On Iterative Hard Thresholding Methods for High-dimensional M-Estimation. In 28th Annual Conference on Neural Information Processing Systems (NIPS), 2014.
[13] Yiyuan She and Art B. Owen. Outlier Detection Using Nonconvex Penalized Regression. arXiv:1006.2592 (stat.ME), 2010.
[14] Rahul Garg and Rohit Khandekar. Gradient descent with sparsification: an iterative algorithm for sparse recovery with restricted isometry property. In 26th International Conference on Machine Learning (ICML), 2009.
[15] Allen Y. Yang, Arvind Ganesh, Zihan Zhou, S. Shankar Sastry, and Yi Ma. A Review of Fast ℓ1-Minimization Algorithms for Robust Face Recognition. CoRR, abs/1007.3753, 2012.
[16] Beatrice Laurent and Pascal Massart. Adaptive estimation of a quadratic functional by model selection. The Annals of Statistics, 28(5):1302–1338, 2000.
[17] Thomas Blumensath. Sampling and reconstructing signals from a union of linear subspaces. IEEE Transactions on Information Theory, 57(7):4660–4671, 2011.
[18] Roman Vershynin. Introduction to the non-asymptotic analysis of random matrices. In Y. Eldar and G. Kutyniok, editors, Compressed Sensing: Theory and Applications, chapter 5, pages 210–268. Cambridge University Press, 2012.
Stochastic Online Greedy Learning with Semi-bandit Feedbacks

Tian Lin, Tsinghua University, Beijing, China, lintian06@gmail.com
Jian Li, Tsinghua University, Beijing, China, lapordge@gmail.com
Wei Chen, Microsoft Research, Beijing, China, weic@microsoft.com

Abstract

The greedy algorithm has been extensively studied in the field of combinatorial optimization for decades. In this paper, we address the online learning problem that arises when the input to the greedy algorithm is stochastic, with unknown parameters that have to be learned over time. We first propose the greedy regret and ϵ-quasi greedy regret as learning metrics comparing against the performance of the offline greedy algorithm. We then propose two online greedy learning algorithms with semi-bandit feedbacks, which use multi-armed bandit and pure-exploration bandit policies at each level of greedy learning, one for each of the regret metrics respectively. Both algorithms achieve $O(\log T)$ problem-dependent regret bounds ($T$ being the time horizon) for a general class of combinatorial structures and reward functions that allow greedy solutions. We further show that the bound is tight in $T$ and other problem instance parameters.

1 Introduction

The greedy algorithm is simple and easy to implement, and can be applied to solve a wide range of complex optimization problems, either with exact solutions (e.g. minimum spanning tree [19, 25]) or approximate solutions (e.g. maximum coverage [11] or influence maximization [17]). Moreover, for many practical problems, the greedy algorithm often serves as the first heuristic of choice and performs well in practice even when it does not provide a theoretical guarantee. The classical greedy algorithm assumes that a certain reward function is given, and it constructs the solution iteratively. In each phase, it searches for a locally optimal element that maximizes the marginal gain in reward, and adds it to the solution.
We refer to this case as the offline greedy algorithm with a given reward function, and to the corresponding problem as the offline problem. The phase-by-phase process of the greedy algorithm naturally forms a decision sequence illustrating the decision flow in finding the solution, which we call the greedy sequence. We characterize the decision class as an accessible set system, a general combinatorial structure encompassing many interesting problems. In many real applications, however, the reward function is stochastic and is not known in advance: the reward is only instantiated, based on the unknown distribution, after the greedy sequence is selected. For example, in the influence maximization problem [17], social influence is propagated in a social network from the selected seed nodes following a stochastic model with unknown parameters, and one wants to find the optimal seed set of size $k$ that generates the largest influence spread, which is the expected number of nodes influenced in a cascade. In this case, the reward of seed selection is only instantiated after the seed selection, and is only one of the random outcomes. Therefore, when the stochastic reward function is unknown, we aim at maximizing the expected reward over time while gradually learning the key parameters of the expected reward function. This falls in the domain of online learning, and we refer to the online algorithm as the strategy of the player, who makes sequential decisions, interacts with the environment, obtains feedbacks, and accumulates her reward. For online greedy algorithms in particular, at each time step the player selects and plays a candidate decision sequence while the environment instantiates the reward function; the player then collects the values of the instantiated function at every phase of the decision sequence as feedbacks (thus the name semi-bandit feedbacks [2]), and takes the value of the final phase as the reward accumulated in this step.
The typical objective for an online algorithm is to make sequential decisions that compete against the optimal solution of the offline problem, where the reward function is known a priori. For online greedy algorithms, we instead compare against the solution of the offline greedy algorithm, and minimize the gap between their cumulative rewards over time, termed the greedy regret. Furthermore, in some problems such as influence maximization, the reward function is estimated with error even in the offline problem [17], and thus the greedily selected element at each phase may contain some ϵ error. We call such a greedy sequence an ϵ-quasi greedy sequence. To accommodate these cases, we also define the metric of ϵ-quasi greedy regret, which compares the online solution against the minimum offline solution over all ϵ-quasi greedy sequences. In this paper, we propose two online greedy algorithms targeted at these two regret metrics respectively. The first algorithm, OG-UCB, uses the stochastic multi-armed bandit (MAB) [22, 8], in particular the well-known UCB policy [3], as the building block to minimize the greedy regret. We apply the UCB policy to every phase by associating a confidence bound with each arm, and then greedily choose the arm with the highest upper confidence bound in the decision process. For the second scenario, where we allow tolerating an ϵ-error in each phase, we propose a first-explore-then-exploit algorithm, OG-LUCB, to minimize the ϵ-quasi greedy regret. For every phase in the greedy process, OG-LUCB applies the LUCB policy [16, 9], which uses upper and lower confidence bounds to eliminate arms. It first explores each arm until the lower bound of one arm is higher than the upper bound of any other arm within an ϵ-error; the current phase then switches to exploiting that best arm, and the algorithm continues to the next phase.
Both OG-UCB and OG-LUCB achieve problem-dependent $O(\log T)$ bounds in terms of their respective regret metrics, where the coefficients in front of $\log T$ depend on direct elements along the greedy sequence (a.k.a. its decision frontier) corresponding to the instance of the learning problem. The two algorithms have complementary advantages: when we really target the greedy regret (setting ϵ to 0 for OG-LUCB), OG-UCB has a slightly better regret guarantee and does not need an artificial switch between exploration and exploitation; when we are satisfied with the ϵ-quasi greedy regret, OG-LUCB works, but OG-UCB cannot be adapted to this case and may suffer a larger regret. We also exhibit a problem instance for which the upper bound is tight against the lower bound in $T$ and other problem parameters. We further show that our algorithms can easily be extended to the knapsack problem, and applied to stochastic online maximization for consistent functions, submodular functions, etc., in the supplementary material. To summarize, our contributions include the following: (a) To the best of our knowledge, we are the first to propose the framework of greedy regret and ϵ-quasi greedy regret to characterize the online performance of the stochastic greedy algorithm in different scenarios; it works for a wide class of accessible set systems and general reward functions. (b) We propose Algorithms OG-UCB and OG-LUCB that achieve problem-dependent $O(\log T)$ regret bounds. (c) We also show that the upper bound matches the lower bound (up to a constant factor). Due to the space constraint, the analysis of the algorithms, applications, and the empirical evaluation of the lower bound are moved to the supplementary material.

Related Work. The multi-armed bandit (MAB) problem, in both stochastic and adversarial settings [22, 4, 6], has been widely studied for decades.
Most works focus on minimizing the cumulative regret over time [3, 14] or identifying the optimal solution in the pure-exploration setting [1, 16, 7]. Among those works, one line of research generalizes MAB to combinatorial learning problems [8, 13, 2, 10, 21, 23, 9]. Our paper belongs to this line and considers stochastic learning with semi-bandit feedbacks, while we focus on the greedy algorithm, its structure and its performance measure, which have not been addressed before. The classical greedy algorithms in the offline setting are studied in many applications [19, 25, 11, 5], and there is a line of work [15, 18] focusing on characterizing the greedy structure of solutions. We adopt their characterizations of accessible set systems in our online setting of greedy learning. There is also a branch of work using the greedy algorithm to solve online learning problems, but those works require knowledge of the exact form of the reward function, restricted to special cases such as linear [2, 20] and submodular rewards [26, 12]. Our work does not assume an exact form, and it covers a much larger class of combinatorial structures and reward functions.

2 Preliminaries

Online combinatorial learning can be formulated as a repeated game between the environment and the player under the stochastic multi-armed bandit framework. Let $E = \{e_1, e_2, \ldots, e_n\}$ be a finite ground set of size $n$, and let $F$ be a collection of subsets of $E$. We consider an accessible set system $(E, F)$ satisfying the following two axioms: (1) $\emptyset \in F$; (2) if $S \in F$ and $S \neq \emptyset$, then there exists some $e \in S$ such that $S \setminus \{e\} \in F$. We define any set $S \subseteq E$ as a feasible set if $S \in F$. For any $S \in F$, its accessible set is defined as $N(S) := \{e \in E \setminus S : S \cup \{e\} \in F\}$. We say a feasible set $S$ is maximal if $N(S) = \emptyset$. Define the largest length of any feasible set as $m := \max_{S \in F} |S|$ ($m \leq n$), and the largest width of any feasible set as $W := \max_{S \in F} |N(S)|$ ($W \leq n$).
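On small explicit instances, the two axioms and the accessible set $N(S)$ can be checked directly; a toy Python sketch (the helper names are ours, and only practical for tiny set families):

```python
def is_accessible(F):
    """Check the two accessible-set-system axioms on an explicit family F,
    given as a collection of frozensets: (1) the empty set is feasible;
    (2) every nonempty feasible S has some e in S with S \ {e} feasible."""
    F = list(F)
    if frozenset() not in F:
        return False
    return all(any(S - {e} in F for e in S) for S in F if S)

def accessible_arms(F, S):
    """N(S): elements whose addition to S keeps the set feasible."""
    F = list(F)
    ground = set().union(*F) if F else set()
    return {e for e in ground - S if frozenset(S | {e}) in F}
```

For instance, the family of all subsets of {1, 2, 3} of size at most 2 satisfies both axioms, with N(∅) = {1, 2, 3} and N({1, 2}) = ∅ (so {1, 2} is maximal).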
We say that such an accessible set system $(E, F)$ is the decision class of the player. In combinatorial learning problems, the size of $F$ is usually very large (e.g., exponential in $m$, $W$ and $n$). Beginning with the empty set, the accessible set system $(E, F)$ ensures that any feasible set $S$ can be acquired by adding elements one by one in some order (cf. Lemma A.1 in the supplementary material for more details), which naturally forms the decision process of the player. For convenience, we say the player chooses a decision sequence, defined as an ordered tuple of feasible sets $\sigma := \langle S_0, S_1, \ldots, S_k \rangle \in F^{k+1}$ satisfying $\emptyset = S_0 \subset S_1 \subset \cdots \subset S_k$ and, for each $i = 1, 2, \ldots, k$, $S_i = S_{i-1} \cup \{s_i\}$ where $s_i \in N(S_{i-1})$. A decision sequence $\sigma$ is maximal if and only if $S_k$ is maximal. Let $\Omega$ be an arbitrary set. The environment draws i.i.d. samples $\omega_1, \omega_2, \ldots$ from $\Omega$ at each time $t = 1, 2, \ldots$, following a predetermined but unknown distribution. Consider a bounded reward function $f : F \times \Omega \to \mathbb{R}$ that is non-decreasing¹ in its first parameter, while the exact form of the function is unknown to the player. We use the shorthand $f_t(S) := f(S, \omega_t)$ to denote the reward for any given $S$ at time $t$, and denote the expected reward as $f(S) := \mathbb{E}_{\omega_1}[f_1(S)]$, where the expectation $\mathbb{E}_{\omega_t}$ is taken over the randomness of the environment at time $t$. For ease of presentation, we assume that the reward function at any time $t$ is normalized with an arbitrary alignment as follows: (1) $f_t(\emptyset) = L$ (for any constant $L \geq 0$); (2) for any $S \in F$ and $e \in N(S)$, $f_t(S \cup \{e\}) - f_t(S) \in [0, 1]$. Therefore, the reward function $f(\cdot, \cdot)$ is implicitly bounded within $[L, L + m]$. We extend the concept of arms in MAB and introduce the notation $a := e|S$ to define an arm, representing the element $e$ selected given the prefix $S$, where $S$ is a feasible set and $e \in N(S)$; and we define $A := \{e|S : \forall S \in F, \forall e \in N(S)\}$ as the arm space.
Then we can define the marginal reward under $f_t$ as $f_t(e|S) := f_t(S \cup \{e\}) - f_t(S)$, and the expected marginal reward under $f$ as $f(e|S) := f(S \cup \{e\}) - f(S)$. Notice that the use of arms characterizes the marginal reward, and also indicates that it depends on the player's previous decisions.

2.1 The Offline Problem and the Offline Greedy Algorithm

In the offline problem, we assume that $f$ is provided as a value oracle. The objective is therefore to find the optimal solution $S^* = \arg\max_{S \in F} f(S)$, which depends only on the player's decision. When the optimal solution is computationally hard to obtain, we are usually interested in finding a feasible set $S^+ \in F$ such that $f(S^+) \geq \alpha f(S^*)$ for some $\alpha \in (0, 1]$; $S^+$ is then called an $\alpha$-approximation solution. That is a typical case where the greedy algorithm comes into play. The offline greedy algorithm is a local search algorithm that refines the solution phase by phase. It proceeds as follows: (a) let $G_0 = \emptyset$; (b) for each phase $k = 0, 1, \ldots$, find $g_{k+1} = \arg\max_{e \in N(G_k)} f(e|G_k)$, and let $G_{k+1} = G_k \cup \{g_{k+1}\}$; (c) the process ends when $N(G_{k+1}) = \emptyset$ ($G_{k+1}$ is maximal). We define the maximal decision sequence $\sigma^G := \langle G_0, G_1, \ldots, G_{m_G} \rangle$ ($m_G$ being its length) found by the offline greedy algorithm as the greedy sequence. For simplicity, we assume that it is unique.

¹Therefore, the optimal solution is a maximal decision sequence.

One important feature is that the greedy algorithm uses a polynomial number of calls ($\mathrm{poly}(m, W, n)$) to the offline oracle, even though the size of $F$ or $A$ may be exponentially large. In some cases, such as the offline influence maximization problem [17], the value of $f(\cdot)$ can only be accessed with some error or estimated approximately. Sometimes, even when $f(\cdot)$ can be computed exactly, we may only need an approximate maximizer in each greedy phase in favor of computational efficiency (e.g., efficient submodular maximization [24]). To capture such scenarios, we say a maximal decision sequence $\sigma = \langle S_0, S_1, \ldots$
, S_{m'}\rangle$ is an ϵ-quasi greedy sequence (ϵ ≥ 0) if the greedy decision can tolerate an ϵ error in every phase, i.e., for each $k = 0, 1, \ldots, m'-1$ with $S_{k+1} = S_k \cup \{s_{k+1}\}$, $f(s_{k+1}|S_k) \geq \max_{s \in N(S_k)} f(s|S_k) - \epsilon$. Notice that there can be many ϵ-quasi greedy sequences; we denote by $\sigma^Q := \langle Q_0, Q_1, \ldots, Q_{m_Q} \rangle$ ($m_Q$ being its length) the one with the minimum reward, that is, $f(Q_{m_Q})$ is minimized over all ϵ-quasi greedy sequences.

2.2 The Online Problem

In the online case, in contrast, $f$ is not provided. The player can only access one of the functions $f_1, f_2, \ldots$ generated by the environment, one per time step of a repeated game. For each time $t$, the game proceeds in the following three steps: (1) the environment draws an i.i.d. sample $\omega_t \in \Omega$ from its predetermined distribution without revealing it; (2) the player, based on her previous knowledge, selects a decision sequence $\sigma_t = \langle S_0, S_1, \ldots, S_{m_t} \rangle$, which reflects the process of her decisions phase by phase; (3) the player then plays $\sigma_t$ and gains reward $f_t(S_{m_t})$, while observing the intermediate feedbacks $f_t(S_0), f_t(S_1), \ldots, f_t(S_{m_t})$ to update her knowledge. We refer to such feedbacks as semi-bandit feedbacks in the decision order. For any time $t = 1, 2, \ldots$, denote $\sigma_t = \langle S^t_0, S^t_1, \ldots, S^t_{m_t} \rangle$ and $S^t := S^t_{m_t}$. The player makes sequential decisions, and the classical objective is to minimize the cumulative gap of rewards against the optimal solution [3] or an approximation solution [10]. For example, when the optimal solution $S^* = \arg\max_{S \in F} \mathbb{E}[f_1(S)]$ can be solved in the offline problem, we minimize the expected cumulative regret $R(T) := T \cdot \mathbb{E}[f_1(S^*)] - \sum_{t=1}^{T} \mathbb{E}[f_t(S^t)]$ over the time horizon $T$, where the expectation is taken over the randomness of the environment and any randomization by the player's algorithm. In this paper, we are interested in online algorithms that are comparable to the solution of the offline greedy algorithm, namely the greedy sequence $\sigma^G = \langle G_0, G_1, \ldots, G_{m_G} \rangle$.
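As a concrete reference point, the offline greedy procedure of Section 2.1 can be sketched on an explicit toy set system (the value oracle f is an ordinary Python function; the helper name is ours):

```python
def offline_greedy(F, f):
    """Offline greedy from the text: start from the empty set; at each
    phase add the accessible element with the largest marginal gain
    f(S + {e}) - f(S); stop once the current set is maximal (N(S) empty).
    F is an explicit family of frozensets, f a value oracle on feasible sets."""
    ground = set().union(*F)
    S = frozenset()
    sequence = [S]                             # the greedy sequence <G_0, G_1, ...>
    while True:
        N = [e for e in ground - S if frozenset(S | {e}) in F]
        if not N:
            return sequence                    # maximal: greedy sequence found
        g = max(N, key=lambda e: f(S | {e}) - f(S))
        S = frozenset(S | {g})
        sequence.append(S)
```

On the family of subsets of {1, 2, 3} of size at most 2 with an additive reward weighting the elements 1, 3 and 2 respectively, greedy picks element 2 and then element 3, so the greedy sequence is ⟨∅, {2}, {2, 3}⟩.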
Thus, the objective is to minimize the greedy regret, defined as
$$R^G(T) := T \cdot \mathbb{E}[f_1(G_{m_G})] - \sum_{t=1}^{T} \mathbb{E}[f_t(S^t)]. \quad (1)$$
Given $\epsilon \geq 0$, we define the ϵ-quasi greedy regret as
$$R^Q(T) := T \cdot \mathbb{E}[f_1(Q_{m_Q})] - \sum_{t=1}^{T} \mathbb{E}[f_t(S^t)], \quad (2)$$
where $\sigma^Q = \langle Q_0, Q_1, \ldots, Q_{m_Q} \rangle$ is the minimum ϵ-quasi greedy sequence. We remark that if the offline greedy algorithm provides an $\alpha$-approximation solution (with $0 < \alpha \leq 1$), then the greedy regret (or ϵ-quasi greedy regret) also provides an $\alpha$-approximation regret, which is the regret compared to an $\alpha$ fraction of the optimal solution, as defined in [10]. In the rest of the paper, our goal is to design a policy for the player that is comparable to the offline greedy algorithm; in other words, $R^G(T)/T = f(G_{m_G}) - \frac{1}{T}\sum_{t=1}^{T} \mathbb{E}[f_t(S^t)] = o(1)$. Thus, achieving sublinear greedy regret $R^G(T) = o(T)$ is our main focus.

3 The Online Greedy and Algorithm OG-UCB

In this section, we propose our Online Greedy (OG) algorithm with the UCB policy to minimize the greedy regret (defined in (1)). For any arm $a = e|S \in A$, playing $a$ at time $t$ yields the marginal reward as a random variable $X_t(a) = f_t(a)$, in which the random event $\omega_t \in \Omega$ is i.i.d., and we denote by $\mu(a)$ its true mean (i.e., $\mu(a) := \mathbb{E}[X_1(a)]$).

Algorithm 1 OG
Require: MaxOracle
1: for t = 1, 2, . . . do
2:   S0 ← ∅; k ← 0; h0 ← true
3:   repeat                                             ▷ online greedy procedure
4:     A ← {e|Sk : ∀e ∈ N(Sk)}; t′ ← Σ_{a∈A} N(a) + 1
5:     (s_{k+1}|Sk, hk) ← MaxOracle(A, X̂(·), N(·), t′)  ▷ find the current maximal
6:     S_{k+1} ← Sk ∪ {s_{k+1}}; k ← k + 1
7:   until N(Sk) = ∅                                    ▷ until a maximal sequence is found
8:   Play sequence σt ← ⟨S0, . . . , Sk⟩, observe {ft(S0), . . . , ft(Sk)}, and gain ft(Sk).
9:   for all i = 1, 2, . . . , k do                      ▷ update according to signals from MaxOracle
10:    if h0, h1, · · · , h_{i−1} are all true then
11:      Update X̂(si|S_{i−1}) and N(si|S_{i−1}) according to (3).
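The per-arm bookkeeping in Algorithm 1 (lazy-initialized empirical means X̂(a) and counters N(a)) together with a UCB-style selection rule can be sketched as follows; this is an illustrative toy using dictionaries, not the paper's implementation:

```python
import math

def ucb_pick(arms, X_hat, N, t):
    """UCB-style rule: first initialize any unplayed arm (ties broken
    arbitrarily), otherwise pick the arm maximizing
    X_hat(a) + sqrt(3 ln t / (2 N(a)))."""
    for a in arms:
        if N.get(a, 0) == 0:
            return a, True                  # initialize arms
    def ucb(a):
        return X_hat[a] + math.sqrt(3 * math.log(t) / (2 * N[a]))
    return max(arms, key=ucb), True         # signal h is always true for UCB

def update(X_hat, N, a, reward):
    """Incremental empirical-mean update: equivalent to averaging all
    observed marginal rewards of arm a, as in the counters above."""
    N[a] = N.get(a, 0) + 1
    X_hat[a] = X_hat.get(a, 0.0) + (reward - X_hat.get(a, 0.0)) / N[a]
```

The dictionaries play the role of lazy initialization: an arm consumes memory only once it has actually been played.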
Subroutine 2 UCB(A, X̂(·), N(·), t) to implement MaxOracle
Setup: confidence radius rad_t(a) := sqrt(3 ln t / (2 N(a))), for each a ∈ A
1: if ∃a ∈ A such that X̂(a) is not initialized then      ▷ break ties arbitrarily
2:   return (a, true)                                     ▷ to initialize arms
3: else                                                   ▷ apply UCB's rule
4:   I⁺_t ← arg max_{a∈A} { X̂(a) + rad_t(a) }, and return (I⁺_t, true)

Here $\mu(a) := \mathbb{E}[X_1(a)]$ denotes the true mean of arm $a$. Let $\hat X(a)$ be the empirical mean of the marginal reward of $a$, and $N(a)$ the counter of plays. More specifically, denote by $\hat X_t(a)$ and $N_t(a)$ the values of $\hat X(a)$ and $N(a)$ at the beginning of time step $t$; they are evaluated as
$$\hat X_t(a) = \frac{\sum_{i=1}^{t-1} f_i(a)\, I_i(a)}{\sum_{i=1}^{t-1} I_i(a)}, \qquad N_t(a) = \sum_{i=1}^{t-1} I_i(a), \quad (3)$$
where $I_i(a) \in \{0, 1\}$ indicates whether $a$ is updated at time $i$. In particular, we assume that our algorithm is lazy-initialized, so that each $\hat X(a)$ and $N(a)$ is 0 by default until $a$ is played. The Online Greedy algorithm (OG), proposed in Algorithm 1, serves as a meta-algorithm allowing different implementations of the subroutine MaxOracle. At each time $t$, OG calls MaxOracle (Line 5, to be specified later) to find the local maximum phase by phase, until the decision sequence $\sigma_t$ is constructed. It then plays the sequence $\sigma_t$, observes the feedbacks, and gains the reward (Line 8). Meanwhile, OG collects the Boolean signals $h_k$ from MaxOracle during the greedy process (Line 5), and updates the estimators $\hat X(\cdot)$ and $N(\cdot)$ according to those signals (Line 10). MaxOracle, in turn, takes the accessible arms $A$, the estimators $\hat X(\cdot)$ and $N(\cdot)$, and the counted time $t'$, and returns an arm from $A$ together with a signal $h_k \in \{\text{true}, \text{false}\}$ instructing OG whether to update the estimators for the following phase. The classical UCB policy [3] can be used to implement MaxOracle, as described in Subroutine 2. We call the resulting algorithm, OG with MaxOracle implemented by Subroutine 2 (UCB), Algorithm OG-UCB. A few remarks are in order. First, Algorithm OG-UCB chooses the arm with the highest upper confidence bound in each phase.
Second, the signal $h_k$ is always true, meaning that OG-UCB always updates the empirical means of arms along the decision sequence. Third, because we use lazy-initialized $\hat X(\cdot)$ and $N(\cdot)$, memory is allocated only when it is needed.

3.1 Regret Bound of OG-UCB

For any feasible set $S$, define the greedy element for $S$ as $g^*_S := \arg\max_{e \in N(S)} f(e|S)$, and write $N^-(S) := N(S) \setminus \{g^*_S\}$ for convenience. Denote by $F^\dagger := \{S \in F : S \text{ is maximal}\}$ the collection of all maximal feasible sets in $F$. We use the following gaps to measure the performance of the algorithm.

Definition 3.1 (Gaps). The gap between the maximal greedy feasible set $G_{m_G}$ and any $S \in F$ is defined as $\Delta(S) := f(G_{m_G}) - f(S)$ if this is positive, and 0 otherwise. We define the maximum gap as $\Delta_{\max} := f(G_{m_G}) - \min_{S \in F^\dagger} f(S)$, which is the worst penalty for any maximal feasible set. For any arm $a = e|S \in A$, we define the unit gap of $a$ (i.e., the gap for one phase) as
$$\Delta(a) = \Delta(e|S) := \begin{cases} f(g^*_S|S) - f(e|S), & e \neq g^*_S \\ f(g^*_S|S) - \max_{e' \in N^-(S)} f(e'|S), & e = g^*_S. \end{cases} \quad (4)$$
For any arm $a = e|S \in A$, we define the sunk-cost gap (irreversible once selected) as
$$\Delta^*(a) = \Delta^*(e|S) := \max\Big\{ f(G_{m_G}) - \min_{V : V \in F^\dagger,\; S \cup \{e\} \prec V} f(V),\; 0 \Big\}, \quad (5)$$
where for two feasible sets $A$ and $B$, $A \prec B$ means that $A$ is a prefix of $B$ in some decision sequence, that is, there exists a decision sequence $\sigma = \langle S_0 = \emptyset, S_1, \ldots, S_k \rangle$ such that $S_k = B$ and, for some $j < k$, $S_j = A$. Thus, $\Delta^*(e|S)$ is the largest gap we may incur after fixing our prefix selection to be $S \cup \{e\}$, and it is upper bounded by $\Delta_{\max}$.

Definition 3.2 (Decision frontier). For any decision sequence $\sigma = \langle S_0, S_1, \ldots, S_k \rangle$, define the decision frontier $\Gamma(\sigma) := \bigcup_{i=1}^{k} \{e|S_{i-1} : e \in N(S_{i-1})\} \subseteq A$ as the set of arms that need to be explored in the decision sequence $\sigma$, and $\Gamma^-(\sigma) := \bigcup_{i=1}^{k} \{e|S_{i-1} : \forall e \in N^-(S_{i-1})\}$ similarly.

Theorem 3.1 (Greedy regret bound).
For any time $T$, Algorithm OG-UCB (Algorithm 1 with Subroutine 2) achieves greedy regret
$$R^G(T) \leq \sum_{a \in \Gamma^-(\sigma^G)} \left( \frac{6\,\Delta^*(a) \ln T}{\Delta(a)^2} + \left(\frac{\pi^2}{3} + 1\right) \Delta^*(a) \right), \quad (6)$$
where $\sigma^G$ is the greedy decision sequence.

When $m = 1$, the above theorem immediately recovers the regret bound of the classical UCB policy [3] (with $\Delta^*(a) = \Delta(a)$). The greedy regret is bounded by $O\!\left(\frac{m W \Delta_{\max}}{\Delta^2} \log T\right)$, where $\Delta$ is the minimum unit gap ($\Delta = \min_{a \in A} \Delta(a)$), and the memory cost is at most proportional to the regret. For a special class of linear bandits, a simple extension in which we treat arms $e|S$ and $e|S'$ as the same makes OG-UCB essentially the same as OMM in [20], while the regret is $O(\frac{n}{\Delta}\log T)$ and the memory cost is $O(n)$ (cf. Appendix F.1 of the supplementary material).

4 Relaxing the Greedy Sequence with ϵ-Error Tolerance

In this section, we propose an online algorithm called OG-LUCB, which learns an ϵ-quasi greedy sequence with the goal of minimizing the ϵ-quasi greedy regret (defined in (2)). We learn ϵ-quasi greedy sequences by a first-explore-then-exploit policy, which utilizes results from PAC learning in the fixed-confidence setting. In Section 4.1, we implement MaxOracle via the LUCB policy and derive its exploration time; in Section 4.2, we assume knowledge of the time horizon $T$ and analyze the ϵ-quasi greedy regret; and in Section 4.3, we show that the assumption of knowing $T$ can be removed.

4.1 OG with a First-Explore-Then-Exploit Policy

Given $\epsilon \geq 0$ and failure probability $\delta \in (0, 1)$, we use Subroutine 3 (LUCB_{ϵ,δ}) to implement the subroutine MaxOracle in Algorithm OG. We call the resulting algorithm OG-LUCB_{ϵ,δ}. Specifically, Subroutine 3 is adapted from CLUCB-PAC in [9], specialized to exploring the top-one element with support in $[0, 1]$ (i.e., set $R = \frac{1}{2}$, $\mathrm{width}(M) = 2$ and $\mathrm{Oracle} = \arg\max$ in [9]). Assume that $I_{\mathrm{exploit}}(\cdot)$ is lazy-initialized.
Subroutine 3 LUCB_{ϵ,δ}(A, X̂(·), N(·), t) to implement MaxOracle
Setup: rad_t(a) := sqrt( ln(4W t³/δ) / (2N(a)) ) for each a ∈ A; I_exploit(·) caches arms for exploitation;
1: if I_exploit(A) is initialized then return (I_exploit(A), true) ▷ in the exploitation stage
2: if ∃a ∈ A such that X̂(a) is not initialized then ▷ break ties arbitrarily
3:   return (a, false) ▷ to initialize arms
4: else
5:   Î_t ← arg max_{a∈A} X̂(a)
6:   ∀a ∈ A, X′(a) ← X̂(a) + rad_t(a) if a ≠ Î_t, and X′(a) ← X̂(a) − rad_t(a) if a = Î_t ▷ perturb arms
7:   I′_t ← arg max_{a∈A} X′(a)
8:   if X′(I′_t) − X′(Î_t) > ϵ then ▷ not separated
9:     I′′_t ← arg max_{i∈{Î_t, I′_t}} rad_t(i), and return (I′′_t, false) ▷ in the exploration stage
10:  else ▷ separated
11:    I_exploit(A) ← Î_t ▷ initialize I_exploit(A) with Î_t
12:    return (I_exploit(A), true) ▷ in the exploitation stage

For each greedy phase, the algorithm first explores each arm in A in the exploration stage, during which the returned flag (the second return field) is always false; once the optimal element is found (initializing I_exploit(A) with Î_t), it sticks to I_exploit(A) in the exploitation stage for all subsequent time steps, and the returned flag for this phase becomes true. The main algorithm OG then uses these flags in such a way that it updates arm estimates for phase i if and only if all phases j < i are already in the exploitation stage. This avoids maintaining useless arm estimates and is a major memory saving compared to OG-UCB. In Algorithm OG-LUCB_{ϵ,δ}, we define the total exploration time T^E = T^E(δ) such that for any time t ≥ T^E, OG-LUCB_{ϵ,δ} is in the exploitation stage for all greedy phases encountered by the algorithm. This also means that after time T^E, in every step we play the same maximal decision sequence σ = ⟨S0, S1, · · · , Sk⟩ ∈ F^{k+1}, which we call a stable sequence. Following common practice, we define the hardness coefficient with prefix S ∈ F as

H^ϵ_S := Σ_{e∈N(S)} 1 / max{∆(e|S)², ϵ²},  where ∆(e|S) is defined in (4).   (7)

Rewrite definitions with respect to the ϵ-quasi regret.
Recall that σQ = ⟨Q0, Q1, . . . , Q_{mQ}⟩ is the minimum ϵ-quasi greedy sequence. In this section, we redefine the gap as ∆(S) := max{f(Q_{mQ}) − f(S), 0} for any S ∈ F, the maximum gap as ∆max := f(Q_{mQ}) − min_{S∈F†} f(S), and the sunk-cost gap as ∆∗(a) = ∆∗(e|S) := max{ f(Q_{mQ}) − min_{V∈F†, S∪{e}≺V} f(V), 0 } for any arm a = e|S ∈ A. The following theorem shows that, with high probability, we can find a stable ϵ-quasi greedy sequence, and the total exploration time is bounded.

Theorem 4.1 (High probability exploration time). Given any ϵ ≥ 0 and δ ∈ (0, 1), suppose that after the total exploration time T^E = T^E(δ), Algorithm OG-LUCB_{ϵ,δ} (Algorithm 1 with Subroutine 3) sticks to a stable sequence σ = ⟨S0, S1, · · · , S_{m′}⟩, where m′ is its length. With probability at least 1 − mδ, the following claims hold: (1) σ is an ϵ-quasi greedy sequence; (2) the total exploration time satisfies T^E ≤ 127 Σ_{k=0}^{m′−1} H^ϵ_{S_k} ln(1996 W H^ϵ_{S_k}/δ).

4.2 Time Horizon T is Known

When the time horizon T is known, we may set δ = 1/T in OG-LUCB_{ϵ,δ} and derive the ϵ-quasi regret as follows.

Theorem 4.2. Given any ϵ ≥ 0, when the total time T is known, let Algorithm OG-LUCB_{ϵ,δ} run with δ = 1/T. Suppose σ = ⟨S0, S1, · · · , S_{m′}⟩ is the sequence selected at time T. Define the function R_{Q,σ}(T) := Σ_{e|S∈Γ(σ)} ∆∗(e|S) min{ 127/∆(e|S)², 113/ϵ² } ln(1996 W H^ϵ_S T) + ∆max·m, where m is the largest length of a feasible set and H^ϵ_S is defined in (7). Then the ϵ-quasi regret satisfies RQ(T) ≤ R_{Q,σ}(T) = O((Wm∆max / max{∆², ϵ²}) log T), where ∆ is the minimum unit gap.

In general, the two bounds (Theorem 3.1 and Theorem 4.2) are for different regret metrics and thus cannot be directly compared. When ϵ = 0, OG-UCB is slightly better, but only in the constant before log T. On the other hand, when we are satisfied with the ϵ-quasi greedy regret, OG-LUCB_{ϵ,δ} may work better for

Algorithm 4 OG-LUCB-R (i.e., OG-LUCB with Restart)
Require: ϵ
1: for epoch ℓ = 1, 2, · · · do
2:   Clean X̂(·) and N(·) for all arms, and restart OG-LUCB_{ϵ,δ} with δ = 1/φ_ℓ (defined in (8)).
3:   Run OG-LUCB_{ϵ,δ} for φ_ℓ time steps.
     (exit early, if the time is over.)

some large ϵ, since the bound takes the maximum (in the denominator) of the problem-dependent term ∆(e|S) and the fixed constant ϵ, and the memory cost is only O(mW).

4.3 Time Horizon T is not Known

When the time horizon T is not known, we can apply the "squaring trick" and restart the algorithm in epochs as follows. Define the duration of epoch ℓ as φ_ℓ and its accumulated time as τ_ℓ, where

φ_ℓ := e^{2^ℓ};  τ_ℓ := 0 for ℓ = 0, and τ_ℓ := Σ_{s=1}^{ℓ} φ_s for ℓ ≥ 1.   (8)

For any time horizon T, define the final epoch K = K(T) as the epoch in which T lies, that is, τ_{K−1} < T ≤ τ_K. Our algorithm OG-LUCB-R is then given in Algorithm 4. The following theorem shows that the O(log T) ϵ-quasi regret still holds, with a slight blowup of the constant hidden in the big-O notation (for completeness, the explicit constant before log T can be found in Theorem D.7 of the supplementary material).

Theorem 4.3. Given any ϵ ≥ 0, use φ_ℓ and τ_ℓ defined in (8) and the function R_{Q,σ}(T) defined in Theorem 4.2. In Algorithm OG-LUCB-R, suppose σ^{(ℓ)} = ⟨S^{(ℓ)}_0, S^{(ℓ)}_1, · · · , S^{(ℓ)}_{m^{(ℓ)}}⟩ is the sequence selected by the end of the ℓ-th epoch of OG-LUCB_{ϵ,δ}, where m^{(ℓ)} is its length. For any time T, denote the final epoch by K = K(T) such that τ_{K−1} < T ≤ τ_K. Then the ϵ-quasi regret satisfies RQ(T) ≤ Σ_{ℓ=1}^{K} R_{Q,σ^{(ℓ)}}(φ_ℓ) = O((Wm∆max / max{∆², ϵ²}) log T), where ∆ is the minimum unit gap.

5 Lower Bound on the Greedy Regret

Consider the problem of selecting one element from each of m bandit instances, where the player sequentially collects a prize in every phase. For simplicity, we call it the prize-collecting problem, defined as follows. For each bandit instance i = 1, 2, . . . , m, denote the set Ei = {e_{i,1}, e_{i,2}, . . . , e_{i,W}} of size W. The accessible set system is (E, F), where E = ∪_{i=1}^{m} Ei, F = ∪_{i=1}^{m} Fi ∪ {∅}, and Fi = {S ⊆ E : |S| = i and |S ∩ Ek| = 1 for all 1 ≤ k ≤ i}. The reward function f : F × Ω → [0, m] is non-decreasing in the first parameter, and the form of f is unknown to the player.
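The epoch schedule in (8) can be sketched in a few lines (a minimal sketch: we read the epoch length as φ_ℓ = e^{2^ℓ}, under which each epoch is the square of the previous one, which is what the "squaring trick" name suggests; the function names are ours):

```python
import math

def phi(l):
    """Epoch length under the squaring trick: phi_l = e^(2^l),
    so phi_l = phi_{l-1}^2 (our reading of (8))."""
    return math.exp(2 ** l)

def tau(l):
    """Accumulated time tau_l = sum_{s=1}^{l} phi_s, with tau_0 = 0."""
    return 0.0 if l == 0 else sum(phi(s) for s in range(1, l + 1))

def final_epoch(T):
    """Final epoch K(T): the K with tau_{K-1} < T <= tau_K."""
    K = 1
    while tau(K) < T:
        K += 1
    return K
```

For instance, epoch 1 lasts e² ≈ 7.39 steps, so any horizon T ≤ e² falls in epoch K(T) = 1.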
Let the minimum unit gap be ∆ := min{ f(g∗_S|S) − f(e|S) : S ∈ F, e ∈ N−(S) } > 0, where its value is also unknown to the player. The objective of the player is to minimize the greedy regret. Denote the greedy sequence by σG = ⟨G0, G1, · · · , Gm⟩ and the greedy arms by AG = {g∗_{G_{i−1}}|G_{i−1} : i = 1, 2, · · · , m}. We say an algorithm is consistent if the expected number of plays of all arms a ∈ A \ AG is o(T^η) for any η > 0, i.e., E[Σ_{a∈A\AG} N_T(a)] = o(T^η).

Theorem 5.1. For any consistent algorithm, there exists a problem instance of the prize-collecting problem such that, as T tends to ∞, for any minimum unit gap ∆ ∈ (0, 1/4) with ∆² ≥ (2/(3W)) ξ^{m−1} for some constant ξ ∈ (0, 1), the greedy regret satisfies RG(T) = Ω(mW ln T / ∆²).

We remark that the detailed problem instance and greedy regret can be found in Theorem E.2 of the supplementary material. Furthermore, we may also restrict the maximum gap ∆max to Θ(1), which yields the lower bound RG(T) = Ω((mW/∆max) · (ln T/∆²)) for any sufficiently large T. For the upper bound, OG-UCB (Theorem 3.1) gives RG(T) = O((mW∆max/∆²) log T); thus, our upper bound for OG-UCB matches the lower bound within a constant factor.

Acknowledgments

Jian Li was supported in part by the National Basic Research Program of China grants 2015CB358700, 2011CBA00300, 2011CBA00301, and the National NSFC grants 61202009, 61033001, 61361136003.

References

[1] J.-Y. Audibert and S. Bubeck. Best arm identification in multi-armed bandits. In COLT, 2010.
[2] J.-Y. Audibert, S. Bubeck, and G. Lugosi. Minimax policies for combinatorial prediction games. arXiv preprint arXiv:1105.4871, 2011.
[3] P. Auer, N. Cesa-Bianchi, and P. Fischer. Finite-time analysis of the multiarmed bandit problem. Machine Learning, 47(2-3):235–256, 2002.
[4] P. Auer, N. Cesa-Bianchi, Y. Freund, and R. E. Schapire. The nonstochastic multiarmed bandit problem. SIAM Journal on Computing, 32(1):48–77, 2002.
[5] A. Björner and G. M. Ziegler. Introduction to greedoids.
Matroid Applications, 40:284–357, 1992.
[6] S. Bubeck and N. Cesa-Bianchi. Regret analysis of stochastic and nonstochastic multi-armed bandit problems. arXiv preprint arXiv:1204.5721, 2012.
[7] S. Bubeck, R. Munos, and G. Stoltz. Pure exploration in finitely-armed and continuous-armed bandits. Theoretical Computer Science, 412(19):1832–1852, 2011.
[8] N. Cesa-Bianchi and G. Lugosi. Combinatorial bandits. Journal of Computer and System Sciences, 78(5):1404–1422, 2012.
[9] S. Chen, T. Lin, I. King, M. R. Lyu, and W. Chen. Combinatorial pure exploration of multi-armed bandits. In NIPS, 2014.
[10] W. Chen, Y. Wang, and Y. Yuan. Combinatorial multi-armed bandit: General framework and applications. In ICML, 2013.
[11] V. Chvatal. A greedy heuristic for the set-covering problem. Mathematics of Operations Research, 4(3):233–235, 1979.
[12] V. Gabillon, B. Kveton, Z. Wen, B. Eriksson, and S. Muthukrishnan. Adaptive submodular maximization in bandit setting. In NIPS, 2013.
[13] Y. Gai, B. Krishnamachari, and R. Jain. Learning multiuser channel allocations in cognitive radio networks: A combinatorial multi-armed bandit formulation. In DySPAN. IEEE, 2010.
[14] A. Garivier and O. Cappé. The KL-UCB algorithm for bounded stochastic bandits and beyond. arXiv preprint arXiv:1102.2490, 2011.
[15] P. Helman, B. M. Moret, and H. D. Shapiro. An exact characterization of greedy structures. SIAM Journal on Discrete Mathematics, 6(2):274–283, 1993.
[16] S. Kalyanakrishnan, A. Tewari, P. Auer, and P. Stone. PAC subset selection in stochastic multi-armed bandits. In ICML, 2012.
[17] D. Kempe, J. Kleinberg, and É. Tardos. Maximizing the spread of influence through a social network. In SIGKDD, 2003.
[18] B. Korte and L. Lovász. Greedoids and linear objective functions. SIAM Journal on Algebraic Discrete Methods, 5(2):229–238, 1984.
[19] J. B. Kruskal. On the shortest spanning subtree of a graph and the traveling salesman problem. Proceedings of the American Mathematical Society, 7(1):48–50, 1956.
[20] B. Kveton, Z. Wen, A. Ashkan, H. Eydgahi, and B. Eriksson. Matroid bandits: Fast combinatorial optimization with learning. arXiv preprint arXiv:1403.5045, 2014.
[21] B. Kveton, Z. Wen, A. Ashkan, and C. Szepesvari. Tight regret bounds for stochastic combinatorial semi-bandits. arXiv preprint arXiv:1410.0949, 2014.
[22] T. L. Lai and H. Robbins. Asymptotically efficient adaptive allocation rules. Advances in Applied Mathematics, 6(1):4–22, 1985.
[23] T. Lin, B. Abrahao, R. Kleinberg, J. Lui, and W. Chen. Combinatorial partial monitoring game with linear feedback and its applications. In ICML, 2014.
[24] B. Mirzasoleiman, A. Badanidiyuru, A. Karbasi, J. Vondrak, and A. Krause. Lazier than lazy greedy. In Proc. Conference on Artificial Intelligence (AAAI), 2015.
[25] R. C. Prim. Shortest connection networks and some generalizations. Bell System Technical Journal, 36(6):1389–1401, 1957.
[26] M. Streeter and D. Golovin. An online algorithm for maximizing submodular functions. In NIPS, 2009.
b-bit Marginal Regression

Martin Slawski
Department of Statistics and Biostatistics
Department of Computer Science
Rutgers University
martin.slawski@rutgers.edu

Ping Li
Department of Statistics and Biostatistics
Department of Computer Science
Rutgers University
pingli@stat.rutgers.edu

Abstract

We consider the problem of sparse signal recovery from m linear measurements quantized to b bits. b-bit Marginal Regression is proposed as the recovery algorithm. We study the question of choosing b in the setting of a given budget of bits B = m · b and derive a single easy-to-compute expression characterizing the trade-off between m and b. The choice b = 1 turns out to be optimal for estimating the unit vector corresponding to the signal for any level of additive Gaussian noise before quantization, as well as for adversarial noise. For b ≥ 2, we show that Lloyd-Max quantization constitutes an optimal quantization scheme and that the norm of the signal can be estimated consistently by maximum likelihood by extending [15].

1 Introduction

Consider the common compressed sensing (CS) model

yi = ⟨ai, x∗⟩ + σεi, i = 1, . . . , m, or equivalently y = Ax∗ + σε,
y = (yi)_{i=1}^{m}, A = (Aij)_{i,j=1}^{m,n}, {ai = (Aij)_{j=1}^{n}}_{i=1}^{m}, ε = (εi)_{i=1}^{m},   (1)

where the {Aij} and the {εi} are i.i.d. N(0, 1) (i.e., standard Gaussian) random variables, the latter of which will be referred to as "additive noise" and accordingly σ > 0 as the "noise level", and x∗ ∈ R^n is the signal of interest to be recovered given (A, y). Let s = ∥x∗∥0 := |S(x∗)|, where S(x∗) = {j : |x∗_j| > 0}, be the ℓ0-norm of x∗ (i.e., the cardinality of its support S(x∗)). One of the celebrated results in CS is that accurate recovery of x∗ is possible as long as m ≳ s log n, and can be carried out by several computationally tractable algorithms, e.g., [3, 5, 21, 26, 29].
Subsequently, the concept of signal recovery from an incomplete set (m < n) of linear measurements was developed further to settings in which only coarsely quantized versions of such linear measurements are available, with the extreme case of single-bit measurements [2, 8, 11, 22, 23, 28, 16]. More generally, one can think of b-bit measurements, b ∈ {1, 2, . . .}. Assuming that one is free to choose b given a fixed budget of bits B = m · b gives rise to a trade-off between m and b. An optimal balance of these two quantities minimizes the error in recovering the signal. Such an optimal trade-off depends on the quantization scheme, the noise level, and the recovery algorithm. This trade-off has been considered in previous CS literature [13]. However, the analysis therein concerns an oracle-assisted recovery algorithm equipped with knowledge of S(x∗), which is not fully realistic. In [9], a specific variant of Iterative Hard Thresholding [1] for b-bit measurements is considered. It is shown via numerical experiments that choosing b ≥ 2 can in fact achieve improvements over b = 1 at the level of the total number of bits required for approximate signal recovery. On the other hand, there is no analysis supporting this observation; moreover, the experiments in [9] only concern a noiseless setting. Another approach is to treat quantization as additive error and to perform signal recovery by means of variations of recovery algorithms for infinite-precision CS [10, 14, 18]. In this line of research, b is assumed to be fixed and a discussion of the aforementioned trade-off is missing. In the present paper, we provide an analysis of compressed sensing from b-bit measurements using a specific approach to signal recovery which we term b-bit Marginal Regression. This approach builds on a method for one-bit compressed sensing proposed in an influential paper by Plan and Vershynin [23], which has subsequently been refined in several recent works [4, 24, 28].
As indicated by the name, b-bit Marginal Regression can be seen as a quantized version of Marginal Regression, a simple yet surprisingly effective approach to support recovery that stands out due to its low computational cost, requiring only a single matrix-vector multiplication and a sorting operation [7]. Our analysis yields a precise characterization of the above trade-off involving m and b in various settings. It turns out that the choice b = 1 is optimal for recovering the normalized signal x∗_u = x∗/∥x∗∥2, under additive Gaussian noise as well as under adversarial noise. It is shown that the choice b = 2 additionally enables one to estimate ∥x∗∥2, while being optimal among b ≥ 2 for recovering x∗_u. Hence, for the specific recovery algorithm under consideration, it does not pay off to take b > 2. Furthermore, once the noise level is significantly high, b-bit Marginal Regression is empirically shown to perform roughly as well as several alternative recovery algorithms, a finding suggesting that in high-noise settings taking b > 2 does not pay off in general. As an intermediate step in our analysis, we prove that Lloyd-Max quantization [19, 20] constitutes an optimal b-bit quantization scheme in the sense that it leads to a minimization of an upper bound on the reconstruction error.

Notation: We use [d] = {1, . . . , d} and S(x) for the support of x ∈ R^n; x ⊙ x′ = (x_j · x′_j)_{j=1}^{n}. I(P) is the indicator function of expression P. The symbol ∝ means "up to a positive universal constant". Supplement: Proofs and additional experiments can be found in the supplement.

2 From Marginal Regression to b-bit Marginal Regression

Some background on Marginal Regression. It is common to perform sparse signal recovery by solving an optimization problem of the form

min_x (1/(2m))∥y − Ax∥²_2 + (γ/2) P(x),  γ ≥ 0,   (2)

where P is a penalty term encouraging sparse solutions.
Standard choices for P are P(x) = ∥x∥0, which is computationally infeasible in general; its convex relaxation P(x) = ∥x∥1; or non-convex penalty terms like SCAD or MCP that are more amenable to optimization than the ℓ0-norm [27]. Alternatively, P can be used to enforce a constraint by setting P(x) = ι_C(x), where ι_C(x) = 0 if x ∈ C and +∞ otherwise, with C = {x ∈ R^n : ∥x∥0 ≤ s} or C = {x ∈ R^n : ∥x∥1 ≤ r} being standard choices. Note that (2) is equivalent to the optimization problem

min_x −⟨η, x⟩ + (1/2) x⊤(A⊤A/m)x + (γ/2) P(x),  where η = A⊤y/m.

Replacing A⊤A/m by E[A⊤A/m] = I (recall that the entries of A are i.i.d. N(0, 1)), we obtain

min_x −⟨η, x⟩ + (1/2)∥x∥²_2 + (γ/2) P(x),  η = A⊤y/m,   (3)

which tends to be much simpler to solve than (2), as the first two terms are separable in the components of x. For the choices of P mentioned above, we obtain closed-form solutions for j ∈ [n]:

P(x) = ∥x∥0 :            x̂_j = η_j I(|η_j| ≥ γ^{1/2}),
P(x) = ∥x∥1 :            x̂_j = (|η_j| − γ)_+ sign(η_j),
P(x) = ι_{x:∥x∥0≤s} :    x̂_j = η_j I(|η_j| ≥ |η_{(s)}|),
P(x) = ι_{x:∥x∥1≤r} :    x̂_j = (|η_j| − γ∗)_+ sign(η_j),   (4)

where (·)_+ denotes the positive part, |η_{(s)}| is the s-th largest entry of η in absolute magnitude, and γ∗ = min{γ ≥ 0 : Σ_{j=1}^{n} (|η_j| − γ)_+ ≤ r}. In other words, the estimators are hard- respectively soft-thresholded versions of η_j = A_j⊤y/m, which are essentially equal to the univariate (or marginal) regression coefficients θ_j = A_j⊤y/∥A_j∥²_2 in the sense that η_j = θ_j(1 + O_P(m^{−1})), j ∈ [n]; hence the term "marginal regression". In the literature, it is the estimator in the left half of (4) that is popular [7], albeit as a means to infer the support of x∗ rather than x∗ itself. Under (2), the performance with respect to signal recovery can still be reasonable in view of the statement below.

Proposition 1. Consider model (1) with x∗ ≠ 0 and the Marginal Regression estimator x̂ defined component-wise by x̂_j = η_j I(|η_j| ≥ |η_{(s)}|), j ∈ [n], where η = A⊤y/m.
Then there exist positive constants c, C > 0 such that, with probability at least 1 − cn^{−1},

∥x̂ − x∗∥2 / ∥x∗∥2 ≤ C · ((∥x∗∥2 + σ)/∥x∗∥2) · sqrt(s log n / m).   (5)

In comparison, the relative ℓ2-error of more sophisticated methods like the lasso scales as O({σ/∥x∗∥2} sqrt(s log(n)/m)), which is comparable to (5) once σ is of the same order of magnitude as ∥x∗∥2. Marginal Regression can also be interpreted as a single projected gradient iteration from 0 for problem (2) with P = ι_{x:∥x∥0≤s}. Taking more than one projected gradient iteration gives rise to a popular recovery algorithm known as Iterative Hard Thresholding (IHT, [1]).

Compressed sensing with non-linear observations and the method of Plan & Vershynin. As a generalization of (1), one can consider measurements of the form

yi = Q(⟨ai, x∗⟩ + σεi),  i ∈ [m],   (6)

for some map Q. Without loss of generality, one may assume that ∥x∗∥2 = 1 as long as x∗ ≠ 0 (which is assumed in the sequel), by defining Q accordingly. Plan and Vershynin [23] consider the following optimization problem for recovering x∗ and develop a framework for analysis that covers even more general measurement models than (6). The proposed estimator minimizes

min_{x:∥x∥2≤1, ∥x∥1≤√s} −⟨η, x⟩,  η = A⊤y/m.   (7)

Note that the constraint set {x : ∥x∥2 ≤ 1, ∥x∥1 ≤ √s} contains {x : ∥x∥2 ≤ 1, ∥x∥0 ≤ s}. The authors prefer the former, first because it is suited for approximately sparse signals as well, and second because it is convex. However, the optimization problem with the sparsity constraint is easy to solve:

min_{x:∥x∥2≤1, ∥x∥0≤s} −⟨η, x⟩,  η = A⊤y/m.   (8)

Lemma 1. The solution of problem (8) is given by x̂ = x̃/∥x̃∥2, x̃_j = η_j I(|η_j| ≥ |η_{(s)}|), j ∈ [n].

While this is elementary, we state it as a separate lemma as there has been some confusion in the existing literature. In [4], the same solution is obtained after (unnecessarily) convexifying the constraint set, which yields the unit ball of the so-called s-support norm.
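A minimal NumPy sketch of the ℓ0-constrained estimator from (4) and its unit-norm version from Lemma 1 (the function names and the synthetic test instance are ours):

```python
import numpy as np

def marginal_regression(A, y, s):
    """Top-s hard-thresholded Marginal Regression (left column of (4)):
    keep the s entries of eta = A^T y / m largest in absolute value."""
    eta = A.T @ y / A.shape[0]          # eta = A^T y / m
    x = np.zeros_like(eta)
    keep = np.argsort(-np.abs(eta))[:s]  # indices of the s largest |eta_j|
    x[keep] = eta[keep]
    return x

def unit_marginal_regression(A, y, s):
    """Solution of (8) per Lemma 1: same estimator, rescaled to unit 2-norm."""
    x = marginal_regression(A, y, s)
    return x / np.linalg.norm(x)
```

On a well-conditioned noiseless instance (m ≫ s log n), the thresholding step recovers the support exactly, in line with Proposition 1.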
In [24], a family of concave penalty terms including the SCAD and MCP is proposed in place of the cardinality constraint. However, in light of Lemma 1, the use of such penalty terms lacks motivation. The minimization problem (8) is essentially that of Marginal Regression (3) with P = ι_{x:∥x∥0≤s}, the only difference being that the norm of the solution is fixed to one. Note that the Marginal Regression estimator is equivariant w.r.t. re-scaling of y, i.e., for a · y with a > 0, x̂ changes to ax̂. In addition, let α, β > 0 and define x̂(α) and x̂[β] as the minimizers of the optimization problems

min_{x:∥x∥0≤s} −⟨η, x⟩ + (α/2)∥x∥²_2,   min_{x:∥x∥2≤β, ∥x∥0≤s} −⟨η, x⟩.   (9)

It is not hard to verify that x̂(α)/∥x̂(α)∥2 = x̂[β]/∥x̂[β]∥2 = x̂[1]. In summary, for estimating the direction x∗_u = x∗/∥x∗∥2, it does not matter whether a quadratic term in the objective or an ℓ2-norm constraint is used. Moreover, estimation of the 'scale' ψ∗ = ∥x∗∥2 and of the direction can be separated. Adopting the framework in [23], we provide a straightforward bound on the ℓ2-error of x̂ minimizing (8). To this end, we define two quantities which will be of central interest in the subsequent analysis:

λ = E[g θ(g)], g ∼ N(0, 1), where θ is defined by E[y1|a1] = θ(⟨a1, x∗⟩),
Ψ = inf{C > 0 : P{max_{1≤j≤n} |η_j − E[η_j]| ≤ C sqrt(log(n)/m)} ≥ 1 − 1/n}.   (10)

The quantity λ concerns the deterministic part of the analysis, as it quantifies the distortion of the linear measurements under the map Q, while Ψ is used to deal with the stochastic part. The definition of Ψ is based on the usual tail bound for the maximum of centered sub-Gaussian random variables. In fact, as long as Q has bounded range, Gaussianity of the {Aij} implies that the {η_j − E[η_j]}_{j=1}^{n} are zero-mean sub-Gaussian. Accordingly, the constant Ψ is proportional to the sub-Gaussian norm of the {η_j − E[η_j]}_{j=1}^{n}, cf. [25].

Proposition 2. Consider model (6) s.t. ∥x∗∥2 = 1 and (10). Suppose that λ > 0 and denote by x̂ the minimizer of (8).
Then, with probability at least 1 − 1/n, it holds that

∥x∗ − x̂∥2 ≤ 2√2 (Ψ/λ) sqrt(s log n / m).   (11)

So far, s has been assumed to be known. If that is not the case, s can be estimated as follows.

Proposition 3. In the setting of Proposition 2, consider ŝ = |{j : |η_j| > Ψ sqrt(log(n)/m)}| and x̂ as the minimizer of (8) with s replaced by ŝ. Then, with probability at least 1 − 1/n, S(x̂) ⊆ S(x∗) (i.e., no false positive selection). Moreover, if

min_{j∈S(x∗)} |x∗_j| > (2Ψ/λ) sqrt(log(n)/m),  one has S(x̂) = S(x∗).   (12)

b-bit Marginal Regression. b-bit quantized measurements directly fit into the non-linear observation model (6). Here the map Q represents a quantizer that partitions R_+ into K = 2^{b−1} bins {R_k}_{k=1}^{K} given by distinct thresholds t = (t_1, . . . , t_{K−1})⊤ (in increasing order) and t_0 = 0, t_K = +∞, such that R_1 = [t_0, t_1), . . . , R_K = [t_{K−1}, t_K). Each bin is assigned a distinct representative from M = {µ_1, . . . , µ_K} (in increasing order), so that Q : R → −M ∪ M is defined by z ↦ Q(z) = sign(z) Σ_{k=1}^{K} µ_k I(|z| ∈ R_k). Expanding model (6) accordingly, we obtain

yi = sign(⟨ai, x∗⟩ + σεi) Σ_{k=1}^{K} µ_k I(|⟨ai, x∗⟩ + σεi| ∈ R_k)
   = sign(⟨ai, x∗_u⟩ + τεi) Σ_{k=1}^{K} µ_k I(|⟨ai, x∗_u⟩ + τεi| ∈ R_k/ψ∗),  i ∈ [m],

where ψ∗ = ∥x∗∥2, x∗_u = x∗/ψ∗ and τ = σ/ψ∗. Thus, the scale ψ∗ of the signal can be absorbed into the definition of the bins respectively thresholds, which should be proportional to ψ∗. We may thus again fix ψ∗ = 1, and in turn x∗ = x∗_u, σ = τ, w.l.o.g. for the analysis below. Estimation of ψ∗ separately from x∗_u will be discussed in a separate section.

3 Analysis

In this section, we study in detail the central question of the introduction. Suppose we have a fixed budget B of bits available and are free to choose the number of measurements m and the number of bits per measurement b subject to B = m · b, such that the ℓ2-error ∥x̂ − x∗∥2 of b-bit Marginal Regression is as small as possible. What is the optimal choice of (m, b)? In order to answer this question, let us go back to the error bound (11).
That bound applies to b-bit Marginal Regression for any choice of b and varies with λ = λ_b and Ψ = Ψ_b, both of which additionally depend on σ and on the choice of the thresholds t and the representatives µ. It can be shown that the dependence of (11) on the ratio Ψ/λ is tight asymptotically as m → ∞. Hence, it makes sense to compare two different choices b and b′ in terms of the ratio of Ω_b = Ψ_b/λ_b and Ω_{b′} = Ψ_{b′}/λ_{b′}. Since the bound (11) decays with √m, for b′-bit measurements, b′ > b, to improve over b-bit measurements with respect to the total number of bits used, it is required that Ω_b/Ω_{b′} > sqrt(b′/b). The route to be taken is thus as follows: we first derive expressions for λ_b and Ψ_b and then minimize the resulting expression for Ω_b w.r.t. the free parameters t and µ. We are then in a position to compare Ω_b/Ω_{b′} for b ≠ b′.

Evaluating λ_b = λ_b(t, µ). Below, ⊙ denotes the entry-wise multiplication between vectors.

Lemma 2. We have λ_b(t, µ) = ⟨α(t), E(t) ⊙ µ⟩/(1 + σ²), where
α(t) = (α_1(t), . . . , α_K(t))⊤, α_k(t) = P{|g̃| ∈ R_k(t)}, g̃ ∼ N(0, 1 + σ²), k ∈ [K],
E(t) = (E_1(t), . . . , E_K(t))⊤, E_k(t) = E[g̃ | g̃ ∈ R_k(t)], g̃ ∼ N(0, 1 + σ²), k ∈ [K].

Evaluating Ψ_b = Ψ_b(t, µ). Exact evaluation proves to be difficult. We hence resort to an analytically more tractable approximation which is still sufficiently accurate, as confirmed by experiments.

Lemma 3. As |x∗_j| → 0, j = 1, . . . , n, and as m → ∞, we have Ψ_b(t, µ) ∝ sqrt(⟨α(t), µ ⊙ µ⟩).

Note that the proportionality constant (not depending on b) in front of the given expression does not need to be known, as it cancels out when computing ratios Ω_b/Ω_{b′}. The asymptotics |x∗_j| → 0, j ∈ [n], is limiting but still makes sense for s growing with n (recall that we fix ∥x∗∥2 = 1 w.l.o.g.).

Optimal choice of t and µ. It turns out that the optimal choice of (t, µ) minimizing Ψ_b/λ_b coincides with the solution of an instance of the classical Lloyd-Max quantization problem [19, 20] stated below.
Let h be a random variable with finite variance and Q the quantization map from above:

min_{t,µ} E[{h − Q(h; t, µ)}²] = min_{t,µ} E[{h − sign(h) Σ_{k=1}^{K} µ_k I(|h| ∈ R_k(t))}²].   (13)

Problem (13) can be seen as a one-dimensional k-means problem at the population level, and it is solved in practice by an alternating scheme similar to that used for k-means. For h from a log-concave distribution (e.g., Gaussian), that scheme can be shown to deliver the global optimum [12].

Theorem 1. Consider the minimization problem min_{t,µ} Ψ_b(t, µ)/λ_b(t, µ). Its minimizer (t∗, µ∗) equals that of the Lloyd-Max problem (13) for h ∼ N(0, 1 + σ²). Moreover,

Ω_b(t∗, µ∗) = Ψ_b(t∗, µ∗)/λ_b(t∗, µ∗) ∝ sqrt((σ² + 1)/λ_{b,0}(t∗_0, µ∗_0)),

where λ_{b,0}(t∗_0, µ∗_0) denotes the value of λ_b for σ = 0 evaluated at (t∗_0, µ∗_0), the choice of (t, µ) minimizing Ω_b for σ = 0.

Regarding the choice of (t, µ), the result of Theorem 1 may not come as a surprise, as the entries of y are i.i.d. N(0, 1 + σ²). It is less immediate, though, that this specific choice can also be motivated as the one leading to the minimization of the error bound (11). Furthermore, Theorem 1 implies that the relative performance of b- and b′-bit measurements does not depend on σ as long as the respective optimal choice of (t, µ) is used, which requires σ to be known. Theorem 1 provides an explicit expression for Ω_b that is straightforward to compute. The following table lists the ratios Ω_b/Ω_{b′} for selected values of b and b′.

                       b = 1, b′ = 2     b = 2, b′ = 3       b = 3, b′ = 4
Ω_b/Ω_{b′}:            1.178             1.046               1.013
required for b′ ≫ b:   √2 ≈ 1.414        sqrt(3/2) ≈ 1.225   sqrt(4/3) ≈ 1.155

These figures suggest that the smaller b, the better the performance for a given budget of bits B.

Beyond additive noise. Additive Gaussian noise is perhaps the most studied form of perturbation, but one can of course think of numerous other mechanisms whose effect can be analyzed on the basis of the same scheme used for additive noise, as long as it is feasible to obtain the corresponding expressions for λ and Ψ.
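The alternating scheme for (13) has a particularly simple form for Gaussian h, since the conditional means of the bins are available in closed form. A minimal sketch for h ~ N(0, 1), working on the positive half-line by symmetry (function names and the crude initialization are ours):

```python
import math

def _pdf(z):
    return math.exp(-z * z / 2) / math.sqrt(2 * math.pi)

def _cdf(z):
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

def lloyd_max_gaussian(K, iters=200):
    """Lloyd-Max quantizer for |h|, h ~ N(0, 1): alternate between
    (a) representatives mu_k = E[h | h in bin k] (closed form via pdf/cdf) and
    (b) thresholds t_k = midpoints of adjacent representatives.
    Returns (thresholds t_1..t_{K-1}, representatives mu_1..mu_K)."""
    t = [3.0 * (k + 1) / K for k in range(K - 1)]  # crude equally spaced start
    mu = [0.0] * K
    for _ in range(iters):
        edges = [0.0] + t + [float("inf")]
        for k in range(K):
            a, b = edges[k], edges[k + 1]
            # truncated-normal mean: (phi(a) - phi(b)) / (Phi(b) - Phi(a))
            mu[k] = (_pdf(a) - _pdf(b)) / (_cdf(b) - _cdf(a))
        t = [(mu[k] + mu[k + 1]) / 2 for k in range(K - 1)]
    return t, mu
```

For K = 1 this returns the half-normal mean sqrt(2/π) ≈ 0.798; for K = 2 it converges to the classical 4-level Gaussian quantizer (threshold ≈ 0.982, representatives ≈ 0.453 and 1.510).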
We here do so for the following mechanisms acting after quantization.

(I) Random bin flip. For i ∈ [m]: with probability 1 − p, yi remains unchanged; with probability p, yi is changed to an element from (−M ∪ M) \ {yi} uniformly at random.

(II) Adversarial bin flip. For i ∈ [m]: write yi = qµ_k for q ∈ {−1, 1} and µ_k ∈ M. With probability 1 − p, yi remains unchanged; with probability p, yi is changed to −qµ_K.

Note that for b = 1, (I) and (II) coincide (a sign flip with probability p). Depending on the magnitude of p, the corresponding value λ = λ_{b,p} may even be negative, which is unlike the case of additive noise. Recall that the error bound (11) requires λ > 0. Borrowing terminology from robust statistics, we consider p̄_b = min{p : λ_{b,p} ≤ 0} as the breakdown point, i.e., the (expected) proportion of contaminated observations that can still be tolerated so that (11) continues to hold. Mechanism (II) produces a natural counterpart of gross corruptions in the standard setting (1). It can be shown that, among all maps −M ∪ M → −M ∪ M applied randomly to the observations with a fixed probability, (II) maximizes the ratio Ψ/λ, hence the attribute "adversarial". In Figure 1, we display Ψ_{b,p}/λ_{b,p} for b ∈ {1, 2, 3, 4} for both (I) and (II). The table below lists the corresponding breakdown points. For simplicity, (t, µ) are not optimized but set to the optimal (in the sense of Lloyd-Max) choice (t∗_0, µ∗_0) of the noiseless case. The underlying derivations can be found in the supplement.

           b = 1   b = 2   b = 3   b = 4
(I)  p̄_b:  1/2     3/4     7/8     15/16
(II) p̄_b:  1/2     0.42    0.36    0.31

Figure 1 and the table provide one more argument in favour of one-bit measurements, as they offer better robustness vis-à-vis adversarial corruptions. In fact, once the fraction of such corruptions reaches 0.2, b = 1 performs best on the measurement scale. For the milder corruption scheme (I), b = 2 turns out to be the best choice for significant but moderate p.
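For reference, the quantization map Q(z) = sign(z) Σ_k µ_k I(|z| ∈ R_k) from Section 3, on whose output the mechanisms above act, can be sketched in a couple of lines (the function name is ours):

```python
import numpy as np

def quantize(z, t, mu):
    """b-bit quantizer Q: R -> -M ∪ M with K = 2^(b-1) bins on R_+.
    t: increasing thresholds (t_1, ..., t_{K-1}); mu: representatives
    (mu_1, ..., mu_K). Q(z) = sign(z) * mu_k, where |z| lies in the bin
    R_k = [t_{k-1}, t_k) (t_0 = 0, t_K = +inf)."""
    t = np.asarray(t, dtype=float)
    mu = np.asarray(mu, dtype=float)
    k = np.searchsorted(t, np.abs(z), side="right")  # bin index 0..K-1
    return np.sign(z) * mu[k]
```

For example, with t = [1.0] and mu = [0.5, 1.5] (a 2-bit quantizer), inputs −0.5, 0.3, 2.0 map to −0.5, 0.5, 1.5.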
Figure 1: Ψ_{b,p}/λ_{b,p} (log10-scale) for b ∈ {1, 2, 3, 4} and p ∈ [0, 0.5], under mechanism (I) (left, fraction of bin flips; the curves for b = 3 and b = 4 roughly overlap) and mechanism (II) (right, fraction of gross corruptions).

4 Scale estimation

In Section 2, we have decomposed x∗ = x∗_u ψ∗ into a product of a unit vector x∗_u and a scale parameter ψ∗ > 0. We have pointed out that x∗_u can be estimated by b-bit Marginal Regression separately from ψ∗, since the latter can be absorbed into the definition of the bins {R_k}. Accordingly, we may estimate x∗ as x̂ = x̂_u ψ̂, with x̂_u and ψ̂ estimating x∗_u and ψ∗, respectively. We here consider the maximum likelihood estimator (MLE) for ψ∗, following [15], which studied the estimation of the scale parameter for the entire α-stable family of distributions. The work of [15] was motivated by a different line of one-scan 1-bit CS algorithms [16] based on α-stable designs [17]. First, we consider the case σ = 0, so that the {yi} are i.i.d. N(0, (ψ∗)²). The likelihood function is

L(ψ) = Π_{i=1}^{m} Σ_{k=1}^{K} I(yi ∈ R_k) P(|yi| ∈ R_k) = Π_{k=1}^{K} {2(Φ(t_k/ψ) − Φ(t_{k−1}/ψ))}^{m_k},   (14)

where m_k = |{i : |yi| ∈ R_k}|, k ∈ [K], and Φ denotes the standard Gaussian cdf. Note that for K = 1, L(ψ) is constant (i.e., does not depend on ψ), which confirms that for b = 1 it is impossible to recover ψ∗. For K = 2 (i.e., b = 2), the MLE has a simple closed-form expression given by ψ̂ = t_1/Φ^{−1}(0.5(1 + m_1/m)). The following tail bound establishes fast convergence of ψ̂ to ψ∗.

Proposition 4. Let ε ∈ (0, 1) and c = 2{φ′(t_1/ψ∗)}², where φ′ denotes the derivative of the standard Gaussian pdf. With probability at least 1 − 2 exp(−cmε²), we have |ψ̂/ψ∗ − 1| ≤ ε.

The exponent c is maximized for t_1 = ψ∗ and becomes smaller as t_1/ψ∗ moves away from 1. While scale estimation from 2-bit measurements is possible, convergence can be slow if t_1 is not well chosen.
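The closed-form 2-bit scale MLE ψ̂ = t_1/Φ^{−1}(0.5(1 + m_1/m)) can be sketched with the standard library alone (the function name and the simulated input in the usage note are ours):

```python
from statistics import NormalDist

def scale_mle_2bit(values, t1):
    """Closed-form MLE of psi = ||x*||_2 from noiseless 2-bit measurements.
    values: pre-quantization magnitudes (or any record of which bin each
    |y_i| fell into); m1 counts the inner bin R_1 = [0, t1).
    psi_hat = t1 / Phi^{-1}( (1 + m1/m) / 2 )."""
    m = len(values)
    m1 = sum(1 for v in values if abs(v) < t1)  # inner-bin count m_1
    return t1 / NormalDist().inv_cdf(0.5 * (1 + m1 / m))
```

Simulating y_i ~ N(0, ψ²) with ψ = 2 and t_1 = 2 (so t_1/ψ∗ = 1, the most favorable choice per Proposition 4) recovers ψ to within a few percent for moderate m.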
For b ≥ 3, convergence can be faster, but the MLE is not available in closed form [15]. We now turn to the case σ > 0. The MLE based on (14) is no longer consistent. If x*_u is known, then the joint likelihood for (ψ*, σ) is given by

L(ψ, σ̃) = ∏_{i=1}^m [Φ((u_i − ψ⟨a_i, x*_u⟩)/σ̃) − Φ((l_i − ψ⟨a_i, x*_u⟩)/σ̃)],  (15)

where [l_i, u_i] denotes the interval the i-th observation is contained in before quantization, i ∈ [m]. It is not clear to us whether the likelihood is log-concave, which would ensure that the global optimum can be obtained by convex programming. Empirically, we have not encountered any issue with spurious local minima when using as starting point σ̃ = 0 and, for ψ, the MLE from the noiseless case. The only issue with (15) we are aware of concerns the case in which there exists ψ such that ψ⟨a_i, x*_u⟩ ∈ [l_i, u_i] for all i ∈ [m]. In this situation, the MLE for σ equals zero and the MLE for ψ may not be unique. However, this is a rather unlikely scenario as long as there is a noticeable noise level. As x*_u is typically unknown, we may follow the plug-in principle, replacing x*_u by an estimator x̂_u.
5 Experiments
We here provide numerical results supporting and illustrating some of the key points made in the previous sections. We also compare b-bit Marginal Regression to alternative recovery algorithms.
Setup. Our simulations follow model (1) with n = 500, s ∈ {10, 20, . . . , 50}, σ ∈ {0, 1, 2} and b ∈ {1, 2}. Regarding x*, the support and the signs are selected uniformly at random, while the absolute magnitudes of the entries on the support are drawn from the uniform distribution on [β, 2β], where β = f · (1/λ_{1,σ}) √(log(n)/m) and m = f² (1/λ_{1,σ})² s log n, with f ∈ {1.5, 3, 4.5, . . . , 12} controlling the signal strength. The resulting signal is then normalized to unit 2-norm. Before normalization, the norm of the signal lies in [1, √2] by construction, which ensures that as f increases, the signal strength condition (12) is satisfied with increasing probability.
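The likelihood (15) can be evaluated numerically. The sketch below is an illustrative stand-in (all names are ours): it simulates ⟨a_i, x*_u⟩ directly as N(0, 1) variates, as holds for a Gaussian design with unit-norm x*_u, and locates the MLE by a coarse grid search rather than a proper optimizer:

```python
import math
import random
from statistics import NormalDist

Phi = NormalDist().cdf

def neg_log_lik(psi, sigma, z, bins):
    """-log L(psi, sigma) from (15); z[i] = <a_i, x_u>, bins[i] = (l_i, u_i)."""
    nll = 0.0
    for zi, (l, u) in zip(z, bins):
        p = Phi((u - psi * zi) / sigma) - Phi((l - psi * zi) / sigma)
        nll -= math.log(max(p, 1e-300))
    return nll

# Simulate y_i = psi * <a_i, x_u> + sigma * eps_i, then record the bin
# [l_i, u_i] containing y_i (signed 2-bit breakpoints).
rng = random.Random(2)
psi_true, sig_true, m = 1.0, 1.0, 2000
edges = [-math.inf, -1.0, 0.0, 1.0, math.inf]
z = [rng.gauss(0.0, 1.0) for _ in range(m)]   # <a_i, x_u> for Gaussian a_i
bins = []
for zi in z:
    yi = psi_true * zi + sig_true * rng.gauss(0.0, 1.0)
    k = max(j for j in range(len(edges) - 1) if edges[j] <= yi)
    bins.append((edges[k], edges[k + 1]))

# Coarse grid search for the MLE over (psi, sigma).
grid = [0.25 * j for j in range(2, 9)]        # 0.5, 0.75, ..., 2.0
psi_hat, sig_hat = min(((p, s) for p in grid for s in grid),
                       key=lambda ps: neg_log_lik(ps[0], ps[1], z, bins))
```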
For b = 2, we use Lloyd-Max quantization for a N(0, 1) random variable, which is optimal for σ = 0 but not for σ > 0. Each possible configuration of s, f and σ is replicated 20 times. Due to space limits, a representative subset of the results is shown; the rest can be found in the supplement.
Empirical verification of the analysis in Section 3. The relative performance of 1-bit and 2-bit measurements for estimating x* predicted by the analysis of Section 3 closely agrees with what is observed empirically, as can be seen in Figure 2.
Estimation of the scale and the noise level. Figure 3 suggests that the plug-in MLE for (ψ* = ∥x*∥2, σ) is a suitable approach, at least as long as ψ*/σ is not too small. For σ = 2, the plug-in MLE for ψ* appears to have a noticeable bias, as it tends to 0.92 instead of 1 for increasing f (and thus increasing m). Observe that for σ = 0, convergence to the true value 1 is slower than for σ = 1,
Figure 2: Average ℓ2-estimation errors ∥x* − x̂∥2 for b = 1 and b = 2 on the log2-scale in dependence of the signal strength f (panels: σ = 0, s = 10; σ = 0, s = 50; σ = 1, s = 50; σ = 2, s = 50). The curve 'predicted improvement' (of b = 2 vs. b = 1) is obtained by scaling the ℓ2-estimation error by the factor predicted by the theory of Section 3.
Likewise, the curve 'required improvement' results from scaling the error of b = 1 by 1/√2 and indicates what would be required for b = 2 to improve over b = 1 at the level of the total number of bits.
Figure 3: Estimation of ψ = ∥x*∥2 (here 1) and σ, for s = 50. The curves depict the average of the plug-in MLE discussed in Section 4, while the bars indicate ±1 standard deviation.
while σ is over-estimated (by about 0.2) for small f. The above two issues are presumably a plug-in effect, i.e. a consequence of using x̂_u in place of x*_u.
b-bit Marginal Regression and alternative recovery algorithms. We compare the ℓ2-estimation error of b-bit Marginal Regression to that of several common recovery algorithms. Compared to apparently more principled methods, which try to enforce agreement of Q(y) and Q(Ax̂) w.r.t. the Hamming distance (or a surrogate thereof), b-bit Marginal Regression can be seen as a crude approach, as it is based on maximizing the inner product between y and Ax. One may thus expect its performance to be inferior. In summary, our experiments confirm that this is true in low-noise settings, but not if the noise level is substantial. Below we briefly present the alternatives that we consider.
Plan-Vershynin: The approach in [23] based on (7), which differs only in that the constraint set results from a relaxation. As shown in Figure 4, the performance is similar though slightly inferior.
IHT-quadratic: Standard Iterative Hard Thresholding based on the quadratic loss [1]. As pointed out above, b-bit Marginal Regression can be seen as a one-step version of Iterative Hard Thresholding.
IHT-hinge (b = 1): The variant of Iterative Hard Thresholding for binary observations using a hinge-type loss function, as proposed in [11].
SVM (b = 1): Linear SVM with squared hinge loss and an ℓ1-penalty, implemented in LIBLINEAR [6]. The cost parameter is chosen from (1/√(m log m)) · {2^{−3}, 2^{−2}, . . . , 2^3} by 5-fold cross-validation.
IHT-Jacques (b = 2): A variant of Iterative Hard Thresholding for quantized observations based on a specific piecewise linear loss function [9].
SVM-type (b = 2): This approach is based on solving the following convex optimization problem:

min_{x, {ξ_i}} γ∥x∥1 + ∑_{i=1}^m ξ_i subject to l_i − ξ_i ≤ ⟨a_i, x⟩ ≤ u_i + ξ_i, ξ_i ≥ 0, i ∈ [m],

where [l_i, u_i] is the bin observation i is assigned to. The essential idea is to enforce consistency of the observed and predicted bin assignments up to slacks {ξ_i}, while promoting sparsity of the solution via an ℓ1-penalty. The parameter γ is chosen from √(m log m) · {2^{−10}, 2^{−9}, . . . , 2^3} by 5-fold cross-validation.
Turning to the results as depicted in Figure 4, the difference between the noiseless (σ = 0) and the heavily noisy setting (σ = 2) is perhaps most striking. For σ = 0, both IHT variants significantly outperform b-bit Marginal Regression; by comparing the errors for IHT, b = 2 can be seen to improve over b = 1 at the level of the total number of bits. For σ = 2, b-bit Marginal Regression is on par with the best performing methods; IHT-quadratic for b = 2 achieves only a moderate reduction in error over b = 1, while IHT-hinge appears to be affected by convergence issues. Overall, the results suggest that a setting with substantial noise favours a crude approach (low-bit measurements and conceptually simple recovery algorithms).
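As noted above, b-bit Marginal Regression maximizes the inner product between y and Ax over sparse unit vectors, which in closed form amounts to hard-thresholding A⊤y/m. A minimal 1-bit sketch follows (illustrative only, not the exact implementation used in the experiments; names are ours):

```python
import math
import random

def b_bit_marginal_regression(A, y, s):
    """One-step recovery: keep the s largest-magnitude entries of
    (1/m) A^T y and normalize to unit 2-norm.  This is the maximizer of
    <y, A x> over s-sparse unit vectors x."""
    m, n = len(A), len(A[0])
    eta = [sum(A[i][j] * y[i] for i in range(m)) / m for j in range(n)]
    keep = sorted(range(n), key=lambda j: -abs(eta[j]))[:s]
    x = [eta[j] if j in keep else 0.0 for j in range(n)]
    nrm = math.sqrt(sum(v * v for v in x)) or 1.0
    return [v / nrm for v in x]

# 1-bit demo: y_i = sign(<a_i, x*>) with a Gaussian design.
rng = random.Random(3)
n, s, m = 20, 3, 1000
x_star = [0.0] * n
for j in (2, 7, 11):
    x_star[j] = rng.choice([-1.0, 1.0]) / math.sqrt(s)
A = [[rng.gauss(0.0, 1.0) for _ in range(n)] for _ in range(m)]
y = [1.0 if sum(a[j] * x_star[j] for j in range(n)) >= 0 else -1.0 for a in A]
x_hat = b_bit_marginal_regression(A, y, s)
```

The crudeness of the method is visible in the code: no iteration and no consistency constraint between observed and predicted bins, in contrast to the IHT and SVM-type alternatives above.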
Figure 4: Average ℓ2-estimation errors for several recovery algorithms on the log2-scale in dependence of the signal strength f, for s = 50. We contrast σ = 0 (left) vs. σ = 2 (right), and b = 1 (top) vs. b = 2 (bottom).
6 Conclusion
Bridging Marginal Regression and a popular approach to 1-bit CS due to Plan & Vershynin, we have considered signal recovery from b-bit quantized measurements. The main finding is that for b-bit Marginal Regression it is not beneficial to increase b beyond 2. A compelling argument for b = 2 is the fact that the norm of the signal can be estimated, unlike in the case b = 1. Compared to high-precision measurements, 2-bit measurements also exhibit strong robustness properties. It is of interest whether and under what circumstances this conclusion may differ for other recovery algorithms.
Acknowledgement. This work is partially supported by NSF-Bigdata-1419210, NSF-III-1360971, ONR-N00014-13-1-0764, and AFOSR-FA9550-13-1-0137.
References
[1] T. Blumensath and M. Davies. Iterative hard thresholding for compressed sensing. Applied and Computational Harmonic Analysis, 27:265–274, 2009.
[2] P. Boufounos and R. Baraniuk. 1-bit compressive sensing. In Information Science and Systems, 2008.
[3] E. Candes and T. Tao. The Dantzig selector: statistical estimation when p is much larger than n. The Annals of Statistics, 35:2313–2351, 2007.
[4] S. Chen and A. Banerjee. One-bit Compressed Sensing with the k-Support Norm. In AISTATS, 2015.
[5] D. Donoho. Compressed sensing. IEEE Transactions on Information Theory, 52:1289–1306, 2006.
[6] R.-E. Fan, K.-W. Chang, C.-J. Hsieh, X.-R. Wang, and C.-J. Lin. LIBLINEAR: A library for large linear classification. Journal of Machine Learning Research, 9:1871–1874, 2008.
[7] C. Genovese, J. Jin, L. Wasserman, and Z. Yao. A Comparison of the Lasso and Marginal Regression. Journal of Machine Learning Research, 13:2107–2143, 2012.
[8] S. Gopi, P. Netrapalli, P. Jain, and A. Nori. One-bit Compressed Sensing: Provable Support and Vector Recovery. In ICML, 2013.
[9] L. Jacques, K. Degraux, and C. De Vleeschouwer. Quantized iterative hard thresholding: Bridging 1-bit and high-resolution quantized compressed sensing. arXiv:1305.1786, 2013.
[10] L. Jacques, D. Hammond, and M. Fadili. Dequantizing compressed sensing: When oversampling and non-gaussian constraints combine. IEEE Transactions on Information Theory, 57:559–571, 2011.
[11] L. Jacques, J. Laska, P. Boufounos, and R. Baraniuk. Robust 1-bit Compressive Sensing via Binary Stable Embeddings of Sparse Vectors. IEEE Transactions on Information Theory, 59:2082–2102, 2013.
[12] J. Kieffer. Uniqueness of locally optimal quantizer for log-concave density and convex error weighting function. IEEE Transactions on Information Theory, 29:42–47, 1983.
[13] J. Laska and R. Baraniuk. Regime change: Bit-depth versus measurement-rate in compressive sensing. arXiv:1110.3450, 2011.
[14] J. Laska, P. Boufounos, M. Davenport, and R. Baraniuk. Democracy in action: Quantization, saturation, and compressive sensing. Applied and Computational Harmonic Analysis, 31:429–443, 2011.
[15] P. Li. Binary and Multi-Bit Coding for Stable Random Projections. arXiv:1503.06876, 2015.
[16] P. Li. One scan 1-bit compressed sensing. Technical report, arXiv:1503.02346, 2015.
[17] P. Li, C.-H. Zhang, and T. Zhang. Compressed counting meets compressed sensing. In COLT, 2014.
[18] J. Liu and S. Wright. Robust dequantized compressive sensing.
Applied and Computational Harmonic Analysis, 37:325–346, 2014.
[19] S. Lloyd. Least Squares Quantization in PCM. IEEE Transactions on Information Theory, 28:129–137, 1982.
[20] J. Max. Quantizing for Minimum Distortion. IRE Transactions on Information Theory, 6:7–12, 1960.
[21] D. Needell and J. Tropp. CoSaMP: Iterative signal recovery from incomplete and inaccurate samples. Applied and Computational Harmonic Analysis, 26:301–321, 2008.
[22] Y. Plan and R. Vershynin. One-bit compressed sensing by linear programming. Communications on Pure and Applied Mathematics, 66:1275–1297, 2013.
[23] Y. Plan and R. Vershynin. Robust 1-bit compressed sensing and sparse logistic regression: a convex programming approach. IEEE Transactions on Information Theory, 59:482–494, 2013.
[24] R. Zhu and Q. Gu. Towards a Lower Sample Complexity for Robust One-bit Compressed Sensing. In ICML, 2015.
[25] R. Vershynin. Introduction to the non-asymptotic analysis of random matrices. In: Compressed Sensing: Theory and Applications. Cambridge University Press, 2012.
[26] M. Wainwright. Sharp thresholds for noisy and high-dimensional recovery of sparsity using ℓ1-constrained quadratic programming (Lasso). IEEE Transactions on Information Theory, 55:2183–2202, 2009.
[27] C.-H. Zhang and T. Zhang. A general theory of concave regularization for high-dimensional sparse estimation problems. Statistical Science, 27:576–593, 2013.
[28] L. Zhang, J. Yi, and R. Jin. Efficient algorithms for robust one-bit compressive sensing. In ICML, 2014.
[29] T. Zhang. Adaptive Forward-Backward Greedy Algorithm for Learning Sparse Representations. IEEE Transactions on Information Theory, 57:4689–4708, 2011.
Spectral Norm Regularization of Orthonormal Representations for Graph Transduction
Rakesh Shivanna, Google Inc., Mountain View, CA, USA, rakeshshivanna@google.com
Bibaswan Chatterjee, Dept. of Computer Science & Automation, Indian Institute of Science, Bangalore, bibaswan.chatterjee@csa.iisc.ernet.in
Raman Sankaran, Chiranjib Bhattacharyya, Dept. of Computer Science & Automation, Indian Institute of Science, Bangalore, {ramans,chiru}@csa.iisc.ernet.in
Francis Bach, INRIA - Sierra Project-team, École Normale Supérieure, Paris, France, francis.bach@ens.fr
Abstract
Recent literature [1] suggests that embedding a graph on a unit sphere leads to better generalization for graph transduction. However, the choice of the optimal embedding and an efficient algorithm to compute it remain open. In this paper, we show that orthonormal representations, a class of unit-sphere graph embeddings, are PAC learnable. Existing PAC-based analyses do not apply, as the VC dimension of the function class is infinite. We propose an alternative PAC-based bound, which does not depend on the VC dimension of the underlying function class but is related to the famous Lovász ϑ function. The main contribution of the paper is SPORE, a SPectral regularized ORthonormal Embedding for graph transduction, derived from the PAC bound. SPORE is posed as the minimization of a non-smooth convex function over an elliptope. Such problems are usually solved as semi-definite programs (SDPs) with time complexity O(n^6). We present Infeasible Inexact Proximal (IIP): an inexact proximal method which performs a subgradient procedure on an approximate projection, not necessarily feasible. IIP is more scalable than SDP, has an O(1/√T) convergence rate, and is generally applicable whenever a suitable approximate projection is available. We use IIP to compute SPORE, where the approximate projection step is computed by FISTA, an accelerated gradient descent procedure. We show that the method has a convergence rate of O(1/√T).
The proposed algorithm easily scales to thousands of vertices, while the standard SDP computation does not scale beyond a few hundred vertices. Furthermore, the analysis presented here easily extends to the multiple-graph setting.
1 Introduction
Learning problems on graph-structured data have received significant attention in recent years [11, 17, 20]. We study an instance of graph transduction, the problem of learning labels on the vertices of simple graphs¹. A typical example is webpage classification [20], where only a very small part of the entire web is manually classified. Even for simple graphs, predicting binary labels of the unlabeled vertices is NP-complete [6]. More formally: let G = (V, E), V = [n], be a simple graph with unknown labels y ∈ {±1}^n. Without loss of generality, let the labels of the first m ∈ [n] vertices be observable, and let u := n − m.
¹A simple graph is an unweighted, undirected graph with no self loops or multiple edges.
Let y_S and y_S̄ be the labels of S = [m] and S̄ = V \ S. Given G and y_S, the goal is to learn soft predictions ŷ ∈ R^n such that er^ℓ_S̄[ŷ] := (1/|S̄|) ∑_{j∈S̄} ℓ(y_j, ŷ_j) is small, where ℓ is any loss function. The following formulation has been extensively used [19, 20]:

min_{ŷ∈R^n} er^ℓ_S[ŷ] + λ ŷ⊤K^{−1}ŷ,  (1)

where K is a graph-dependent kernel and λ > 0 is a regularization constant. Let ŷ* be the solution to (1), given G and S ⊆ V, |S| = m. [1] proposed the following generalization bound:

E_{S⊆V} er^ℓ_S̄[ŷ*] ≤ c_1 inf_{ŷ∈R^n} [ er^ℓ_V[ŷ] + λ ŷ⊤K^{−1}ŷ ] + c_2 (tr_p(K)/(λ|S|))^p,  (2)

where c_1, c_2 depend on ℓ and tr_p(K) = ((1/n) ∑_{i∈[n]} K_ii^p)^{1/p}, p > 0. [1] argued that tr_p(K) should be a constant, which can be enforced by normalizing the diagonal entries of K to be 1. This is important advice in graph transduction; however, it is to be noted that the set of normalized kernels is quite large, and (2) gives little insight into choosing the optimal kernel. Normalizing the diagonal entries of K can be viewed geometrically as embedding the graph on a unit sphere.
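To make (1) concrete, the sketch below instantiates it with squared loss (a stand-in for the generic ℓ) on a toy path graph, using the substitution ŷ = Kα so that the regularizer ŷ⊤K^{−1}ŷ becomes α⊤Kα and K^{−1} is never formed. All names are ours:

```python
def graph_transduce(K, y_obs, lam=0.05, steps=4000, lr=0.1):
    """Gradient-descent sketch of (1) with squared loss: minimize
    (1/m) * sum_{i in S} ((K alpha)_i - y_i)^2 + lam * alpha^T K alpha."""
    n = len(K)
    S = [i for i, v in enumerate(y_obs) if v is not None]
    m = len(S)
    alpha = [0.0] * n
    for _ in range(steps):
        yhat = [sum(K[i][j] * alpha[j] for j in range(n)) for i in range(n)]
        res = [0.0] * n
        for i in S:                      # residual on labeled vertices only
            res[i] = yhat[i] - y_obs[i]
        # gradient: (2/m) K^T res + 2*lam*K*alpha (K symmetric, K*alpha = yhat)
        grad = [(2.0 / m) * sum(K[i][j] * res[i] for i in range(n))
                + 2.0 * lam * yhat[j] for j in range(n)]
        alpha = [alpha[j] - lr * grad[j] for j in range(n)]
    return [sum(K[i][j] * alpha[j] for j in range(n)) for i in range(n)]

# Path graph 0-1-2-3-4; K = I + 0.4 A is PSD with unit diagonal,
# i.e. a normalized graph-dependent kernel as advocated above.
A = [[0,1,0,0,0],[1,0,1,0,0],[0,1,0,1,0],[0,0,1,0,1],[0,0,0,1,0]]
K = [[(1.0 if i == j else 0.4 * A[i][j]) for j in range(5)] for i in range(5)]
y_obs = [1.0, 1.0, None, -1.0, None]     # vertices 2 and 4 unlabeled
y_hat = graph_transduce(K, y_obs)
```

The unlabeled vertex 4 inherits the negative label of its only neighbor 3 through the kernel coupling, illustrating how the regularizer propagates labels.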
Recently, [16] studied a rich class of unit-sphere graph embeddings, called orthonormal representations [13], and found them to be statistically consistent for graph transduction. However, the choice of the optimal orthonormal embedding is not clear. We study orthonormal representations via the following equivalent [19] kernel learning formulation of (1), with C = 1/(λm):

ω_C(K, y_S) = max_{α∈R^n} ∑_{i∈S} α_i − (1/2) ∑_{i,j∈S} α_i α_j y_i y_j K_ij s.t. 0 ≤ α_i ≤ C ∀i ∈ S, α_j = 0 ∀j ∉ S,  (3)

from a probably approximately correct (PAC) learning point of view. Note that the final predictions are given by ŷ_i = ∑_{j∈S} K_ij α*_j y_j ∀i ∈ [n], where α* is the optimal solution to (3).
Contributions. We make the following contributions:
– Using (3), we show that the class of orthonormal representations is efficiently PAC learnable over a large class of graph families, including power-law and random graphs.
– The above analysis suggests that spectral norm regularization could be beneficial in computing the best embedding. To this end, we pose the problem of SPectral norm regularized ORthonormal Embedding (SPORE) for graph transduction, namely that of minimizing a convex function over an elliptope. One could solve such problems as SDPs, which unfortunately do not scale well beyond a few hundred vertices.
– We propose the infeasible inexact proximal (IIP) method, a novel projected subgradient descent algorithm in which the projection is approximated by an inexact proximal method. We suggest a novel approximation criterion which approximates the proximal operator for the support function of the feasible set within a given precision. One can compute an approximation to the projection from the inexact proximal point, which may not be feasible, hence the name IIP. We prove that IIP converges to the optimal minimum of a non-smooth convex function at rate O(1/√T) in T iterations.
– The IIP algorithm is then applied to the case where the set of interest is the intersection of two convex sets.
The proximal operator for the support function of the set of interest can be obtained using the FISTA algorithm, once we know the proximal operators for the support functions of the individual sets involved.
– Our analysis paves the way for learning labels on multiple graphs by adopting an MKL-style approach over the embeddings. We present both algorithmic and generalization results.
Notations. Let ∥·∥, ∥·∥_F denote the Euclidean and Frobenius norms, respectively. Let S_n and S⁺_n denote the sets of n × n symmetric and symmetric positive semi-definite matrices, respectively. Let R^n₊ be the non-negative orthant. Let S^{n−1} = {u ∈ R^n₊ : ∥u∥_1 = 1} denote the (n−1)-dimensional simplex. Let [n] := {1, . . . , n}. For any M ∈ S_n, let λ_1(M) ≥ . . . ≥ λ_n(M) denote its eigenvalues. We denote the adjacency matrix of a graph G by A. Let Ḡ denote the complement graph of G, with adjacency matrix Ā = 11⊤ − I − A, where 1 is the vector of all 1's and I is the identity matrix. Let Y = {±1}, Ŷ = R be the label and soft-prediction spaces over V. Given y ∈ Y and ŷ ∈ Ŷ, we use ℓ_{0-1}(y, ŷ) = 1[yŷ < 0] and ℓ_hng(y, ŷ) = (1 − yŷ)₊² to denote the 0-1 and hinge losses, respectively. The notations O, o, Ω, Θ denote standard measures in asymptotic analysis [4].
Related work. [1]'s analysis was restricted to Laplacian matrices and does not give insight into choosing the optimal unit-sphere embedding. [2] studied graph transduction using the PAC model; however, for graph orthonormal embeddings there is no known sample complexity estimate. [16] showed that working with orthonormal embeddings leads to consistency. However, the choice of the optimal embedding and an efficient algorithm to compute it remain open issues. Furthermore, we show that [16]'s sample complexity estimate is sub-optimal.
Preliminaries. An orthonormal embedding [13] of a simple graph G = (V, E), V = [n], is defined by a matrix U = [u_1, . . . , u_n] ∈ R^{d×n} such that u_i⊤u_j = 0 whenever (i, j) ∉ E, and ∥u_i∥ = 1 ∀i ∈ [n].
Let Lab(G) denote the set of all possible orthonormal embeddings of the graph G, Lab(G) := {U | U is an orthonormal embedding of G}. Recently, [8] showed an interesting connection to the set of graph kernel matrices K(G) := {K ∈ S⁺_n | K_ii = 1 ∀i ∈ [n]; K_ij = 0 ∀(i, j) ∉ E}. Note that K ∈ K(G) is positive semi-definite, and hence there exists U ∈ R^{d×n} such that K = U⊤U. Note that K_ij = u_i⊤u_j, where u_i is the i-th column of U. Hence, by inspection, it is clear that U ∈ Lab(G). By a similar argument, for any U ∈ Lab(G) the matrix K = U⊤U ∈ K(G). Thus, the two sets Lab(G) and K(G) are equivalent. Furthermore, orthonormal embeddings are associated with an interesting quantity, the Lovász ϑ function [13, 7]. However, computing ϑ requires solving an SDP, which is impractical.
2 Generalization Bound for Graph Transduction using Orthonormal Embeddings
In this section, we derive a generalization bound, used in the sequel for the PAC analysis. The following error bound is valid for any orthonormal embedding (supplementary material, Section B).
Theorem 1 (Generalization bound). Let G = (V, E) be a simple graph with unknown binary labels y ∈ Y^n on the vertices V. Let K ∈ K(G). Given G and the labels of a randomly drawn subgraph S, let ŷ ∈ Ŷ^n be the predictions learnt by ω_C(K, y_S) in (3). Then, for m ≤ n/2, with probability ≥ 1 − δ over the choice of S ⊂ V with |S| = m,

er^{0-1}_S̄[ŷ] ≤ (1/m) ∑_{i∈S} ℓ_hng(y_i, ŷ_i) + 2C √(2λ_1(K)) + O(√((1/m) log(1/δ))).  (4)

Note that the above is a high-probability bound, in comparison to the expectation bound in (2). The result also suggests that graph embeddings with low spectral norm and low empirical error lead to better generalization. [1]'s analysis in (2) suggests that we should embed a graph on a unit sphere, but does not help to choose the optimal embedding for graph transduction. Exploiting our analysis from (4), we present a spectral norm regularized algorithm in Section 3.
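The equivalence between Lab(G) and K(G) noted above can be verified mechanically: any unit-norm embedding with orthogonality on non-edges yields a Gram matrix with unit diagonal, zeros on non-edges, and positive semi-definiteness for free (x⊤Kx = ∥Ux∥² ≥ 0). An illustrative sketch (helper names are ours):

```python
import math

def gram(U):
    """K = U^T U for columns u_i of U (U given as a list of columns)."""
    return [[sum(ui[t] * uj[t] for t in range(len(ui))) for uj in U]
            for ui in U]

def in_K_of_G(K, edges, n, tol=1e-9):
    """Check the defining properties of K(G): unit diagonal and zeros on
    non-edges.  PSD holds by construction when K is a Gram matrix."""
    for i in range(n):
        if abs(K[i][i] - 1.0) > tol:
            return False
        for j in range(i + 1, n):
            if (i, j) not in edges and abs(K[i][j]) > tol:
                return False
    return True

# Path graph 0-1-2: the only non-edge is (0, 2), so we need u_0 orthogonal
# to u_2; u_1 is adjacent to both, so no orthogonality constraint on it.
u0 = (1.0, 0.0, 0.0)
u2 = (0.0, 1.0, 0.0)
r = 1.0 / math.sqrt(3.0)
u1 = (r, r, r)
K = gram([u0, u1, u2])
```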
We would also like to study the PAC learnability of orthonormal embeddings, defined as follows: given G and y, does there exist an m̃ < n such that, w.p. ≥ 1 − δ over S ⊂ V, |S| ≥ m̃, the generalization error satisfies er^{0-1}_S̄ ≤ ε? The quantity m̃ is termed the labelled sample complexity [2]. Existing analyses [2] do not apply to orthonormal embeddings, as discussed in the related work, Section 1. Theorem 1 allows us to derive improved statistical estimates (Section 3).
3 SPORE Formulation and PAC Analysis
Theorem 1 suggests that penalizing the spectral norm of K would lead to better generalization. To this end, we propose the following formulation:

Ψ_{C,β}(G, y_S) = min_{K∈K(G)} g(K), where g(K) = ω_C(K, y_S) + βλ_1(K).  (5)

²(a)₊ = max(a, 0) ∀a ∈ R.
(5) yields an optimal orthonormal embedding, the optimal K, which we will refer to as SPORE. In this section, we first study the PAC learnability of SPORE and derive a labelled sample complexity estimate. Next, we study the efficient computation of SPORE. Though SPORE can be posed as an SDP, we show in Section 4 that it is possible to exploit the structure and solve it efficiently. Given G and y_S, the function ω_C(K, y_S) is convex in K, as it is the maximum of affine functions of K. The spectral norm λ_1(K) is also convex, and hence g(K) is a convex function. Furthermore, K(G) is an elliptope [5], a convex body which can be described by the intersection of a positive semi-definite constraint and affine constraints. Hence (5) is convex. Usually such formulations are posed as SDPs, which do not scale beyond a few hundred vertices. In Section 4, we derive an efficient first-order method which can solve for thousands of vertices. Let K* be the optimal embedding computed from (5). Note that once the kernel is fixed, the predictions depend only on ω_C(K*, y_S). Let α* be the solution to ω_C(K*, y_S) as in (3); then the final predictions of (5) are given by ŷ_i = ∑_{j∈S} K*_ij α*_j y_j, ∀i ∈ [n].
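The box-constrained dual (3) can be solved by simple projected (cyclic) coordinate ascent. The sketch below is an illustrative stand-in for a proper SVM solver (names are ours) and also evaluates the prediction rule ŷ_i = ∑_{j∈S} K_ij α_j y_j:

```python
def omega_C_solver(K, y, S, C, steps=500, lr=0.1):
    """Cyclic projected coordinate ascent on (3): maximize
    sum_i alpha_i - 0.5 * sum_{ij} alpha_i alpha_j y_i y_j K_ij
    over 0 <= alpha_i <= C for i in S (alpha_j = 0 off S)."""
    n = len(K)
    alpha = [0.0] * n
    for _ in range(steps):
        for i in S:
            g = 1.0 - y[i] * sum(alpha[j] * y[j] * K[i][j] for j in S)
            alpha[i] = min(C, max(0.0, alpha[i] + lr * g))   # clip to box
    return alpha

def predict(K, y, S, alpha):
    """Final predictions y_hat_i = sum_{j in S} K_ij alpha_j y_j."""
    return [sum(K[i][j] * alpha[j] * y[j] for j in S) for i in range(len(K))]

# Path graph 0-1-2-3 with the unit-diagonal kernel K = I + 0.4 A;
# vertices 0 and 3 labeled +1 and -1, vertices 1 and 2 unlabeled.
A = [[0,1,0,0],[1,0,1,0],[0,1,0,1],[0,0,1,0]]
K = [[(1.0 if i == j else 0.4 * A[i][j]) for j in range(4)] for i in range(4)]
y = [1, 0, 0, -1]
alpha = omega_C_solver(K, y, S=[0, 3], C=10.0)
y_hat = predict(K, y, [0, 3], alpha)
```

Each unlabeled vertex takes the sign of its labeled neighbor, as the kernel couples only adjacent vertices here.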
At this point, we derive an interesting graph-dependent error convergence rate. We gather two important results, the proofs of which appear in the supplementary material, Section C.
Lemma 2. Given a simple graph G = (V, E), max_{K∈K(G)} λ_1(K) = ϑ(Ḡ).
Lemma 3. Given G and y, for any S ⊆ V and C > 0, min_{K∈K(G)} ω_C(K, y_S) ≤ ϑ(G)/2.
In the standard PAC setting, there is a complete disconnection between the data distribution and the target hypothesis. However, in the presence of unlabeled nodes, without any assumption on the data it is impossible to learn labels. Following existing literature [1, 9], we work with similarity graphs – where the presence of an edge means two nodes are similar – and derive the following (supplementary material, Section C).
Theorem 4. Let G = (V, E), V = [n], be a simple graph with unknown binary labels y ∈ Y^n on the vertices V. Given G and the labels of a randomly drawn subgraph S ⊂ V, m = |S|, let ŷ be the predictions learnt by SPORE (5), for parameters C = (ϑ(G)/(m√(ϑ(Ḡ))))^{1/2} and β = ϑ(G)/ϑ(Ḡ). Then, for m ≤ n/2, with probability ≥ 1 − δ over the choice of S ⊂ V with |S| = m,

er^{0-1}_S̄[ŷ] = O([(1/m)(√(nϑ(G)) + log(1/δ))]^{1/2}).  (6)

Proof. (Sketch) Let K* be the kernel learnt by SPORE (5). Using Theorem 1 and Lemma 2, for ŷ,

er^{0-1}_S̄[ŷ] ≤ (1/m) ∑_{i∈S} ℓ_hng(y_i, ŷ_i) + 2C √(2ϑ(Ḡ)) + O(√((1/m) log(1/δ))).  (7)

From the primal formulation of (3), using Lemmas 2 and 3, we get

C ∑_{i∈S} ℓ_hng(y_i, ŷ_i) ≤ ω_C(K*, y_S) ≤ Ψ_{C,β}(G, y_S) ≤ ϑ(G)/2 + βϑ(Ḡ).

Plugging this back into (7), choosing β such that βϑ(Ḡ)/(Cm) = 2C√(2ϑ(Ḡ)), and optimizing over C gives the stated choice of parameters. Finally, using ϑ(G)ϑ(Ḡ) = n [13] proves the result.
In the theorem above, Ḡ is the complement graph of G. The optimal orthonormal embedding K* tends to embed vertices in nearby regions if they have connecting edges; hence, the notion of similarity is implicitly captured in the embedding.
From (6), for fixed n and m, note that the error converges at a faster rate for a dense graph (ϑ small) than for a sparse graph (ϑ large). Such connections to the structural properties of the graph were previously unavailable [1]. We also estimate the labelled sample complexity by bounding (6) by ε > 0, obtaining m̃ = Ω((1/ε²)(√(ϑn) + log(1/δ))). This supports the intuition that for a sparse graph one needs a larger number of labelled vertices than for a dense graph. For constant ε, δ, we obtain a fractional labelled sample complexity estimate of m̃/n = Ω((ϑ/n)^{1/2}), a significant improvement over the recently proposed Ω((ϑ/n)^{1/3}) [16]. The use of the stronger machinery of Rademacher averages (supplementary material, Section C), instead of the VC dimension [2], and the specialization to SPORE allow us to improve over the existing analyses [1, 16]. The proposed sample complexity estimate is interesting for ϑ = o(n); examples of such graphs include random graphs (ϑ(G(n, p)) = Θ(√n)) and power-law graphs (ϑ̄ = O(√n)).
4 Inexact Proximal Methods for SPORE
In this section, we propose an efficient algorithm to solve SPORE (see (5)). The optimization problem SPORE can be posed as an SDP. Generic SDP solvers have a runtime complexity of O(n^6) and often do not scale well to large graphs. We study first-order methods, such as projected subgradient procedures, as an alternative to SDPs for minimizing g(K). The main computational challenge in developing such procedures is that it is difficult to compute the projection onto the elliptope. One could potentially use the seminal Dykstra's algorithm [3] for finding a feasible point in the intersection of two convex sets. However, that algorithm finds a point in the intersection only asymptotically. This asymptotic convergence is a serious disadvantage when using Dykstra's algorithm as a projection sub-routine.
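To make the two ingredient projections concrete for the elliptope K(G) viewed as S⁺_n ∩ P(G) (the decomposition used in Section 4.2): projecting onto the PSD cone clips negative eigenvalues, and projecting onto the affine part overwrites the fixed entries. The sketch below runs plain von Neumann alternating projections, an illustrative feasibility scheme in the spirit of the discussion above, not the paper's IIP/FISTA machinery, and unlike Dykstra it does not return the nearest point of the intersection. All names are ours; the eigen-decomposition uses a small cyclic Jacobi routine to stay self-contained:

```python
import math

def jacobi_eigh(M, sweeps=30):
    """Eigen-decomposition of a symmetric matrix by cyclic Jacobi
    rotations; returns (eigenvalues, V) with M ~ V diag(w) V^T."""
    n = len(M)
    A = [row[:] for row in M]
    V = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    for _ in range(sweeps):
        off = sum(A[i][j] ** 2 for i in range(n) for j in range(n) if i != j)
        if off < 1e-20:
            break
        for p in range(n - 1):
            for q in range(p + 1, n):
                if abs(A[p][q]) < 1e-15:
                    continue
                th = 0.5 * math.atan2(2.0 * A[p][q], A[q][q] - A[p][p])
                c, s = math.cos(th), math.sin(th)
                for k in range(n):            # rotate rows p, q
                    Apk, Aqk = A[p][k], A[q][k]
                    A[p][k] = c * Apk - s * Aqk
                    A[q][k] = s * Apk + c * Aqk
                for k in range(n):            # rotate columns p, q
                    Akp, Akq = A[k][p], A[k][q]
                    A[k][p] = c * Akp - s * Akq
                    A[k][q] = s * Akp + c * Akq
                for k in range(n):            # accumulate eigenvectors
                    Vkp, Vkq = V[k][p], V[k][q]
                    V[k][p] = c * Vkp - s * Vkq
                    V[k][q] = s * Vkp + c * Vkq
    return [A[i][i] for i in range(n)], V

def proj_psd(M):
    """Projection onto S+_n: clip negative eigenvalues to zero."""
    n = len(M)
    w, V = jacobi_eigh(M)
    w = [max(wi, 0.0) for wi in w]
    return [[sum(V[i][k] * w[k] * V[j][k] for k in range(n))
             for j in range(n)] for i in range(n)]

def proj_PG(M, edges, n):
    """Projection onto P(G): overwrite the fixed entries
    (diagonal = 1, non-edges = 0)."""
    P = [row[:] for row in M]
    for i in range(n):
        P[i][i] = 1.0
        for j in range(n):
            if i != j and (min(i, j), max(i, j)) not in edges:
                P[i][j] = 0.0
    return P

def feasible_point_KG(M, edges, n, iters=200):
    """Alternate exact projections onto the two sets; returns a point
    satisfying the affine constraints exactly and PSD approximately."""
    for _ in range(iters):
        M = proj_PG(proj_psd(M), edges, n)
    return M
```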
It would be useful to have an algorithm which, after a finite number of iterations, yields an approximate projection with which a subsequent descent algorithm still converges. Motivated by SPORE, we study the problem of minimizing non-smooth convex functions where the projection onto the feasible set can be computed only approximately. Recently, there has been increasing interest in inexact proximal methods [15, 18]. In the sequel, we design an inexact proximal method which yields an O(1/√T) algorithm to solve (5). The algorithm is based on approximating the prox function by an iterative procedure which satisfies a suitably designed criterion.
4.1 An Infeasible Inexact Proximal (IIP) algorithm
Let f be a convex function with a properly defined sub-differential ∂f(x) at every x ∈ X. Consider the following optimization problem:

min_{x∈X⊂R^d} f(x).  (8)

A subgradient projection iteration of the form

x_{k+1} = P_X(x_k − α_k h_k), h_k ∈ ∂f(x_k),  (9)

is often used to arrive at an ε-accurate solution by running the iteration O(1/ε²) times, where P_X(v) = argmin_{x∈X} (1/2)∥v − x∥² is the projection of v ∈ R^d onto X ⊂ R^d. In many situations, such as X = K(G), it is not possible to compute the projection exactly in a finite amount of time, and one may obtain only an approximate projection. Using the Moreau decomposition P_X(v) + prox_{σ_X}(v) = v [14], one can compute the projection if one can compute prox_{σ_X}, where σ_X(a) = max_{x∈X} x⊤a is the support function of X, and prox_{g′} refers to the proximal operator for the function g′ at v, defined as³

prox_{g′}(v) = argmin_{z∈Dom(g′)} p_{g′}(z; v), p_{g′}(z; v) = (1/2)∥v − z∥² + g′(z).  (10)

We assume that one can compute z^ε_X(v), not necessarily in X, such that

p_{σ_X}(z^ε_X(v); v) ≤ min_{z∈R^d} p_{σ_X}(z; v) + ε, and P^ε_X(v) = v − z^ε_X(v).  (11)

Note that z^ε_X is an inexact prox and the resulting estimate of the projection, P^ε_X, can be infeasible but hopefully not too far away. Note that ε = 0 recovers the exact case.
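The Moreau identity P_X(v) + prox_{σ_X}(v) = v can be checked on a set where both sides are available in closed form. As an illustration (not the elliptope case), take X = [−τ, τ]^d: its support function is σ_X(a) = τ∥a∥_1, whose prox is soft-thresholding, so v minus the soft-thresholded v must equal the coordinate-wise clipping of v:

```python
def soft_threshold(v, tau):
    """prox of tau * ||.||_1, the support function of the box [-tau, tau]^d."""
    return [max(abs(x) - tau, 0.0) * (1.0 if x >= 0 else -1.0) for x in v]

def project_box(v, tau):
    """Exact Euclidean projection onto X = [-tau, tau]^d."""
    return [min(max(x, -tau), tau) for x in v]

v = [2.5, -0.3, 1.0, -4.0]
tau = 1.0
# Moreau decomposition: P_X(v) = v - prox_{sigma_X}(v).
via_moreau = [vi - zi for vi, zi in zip(v, soft_threshold(v, tau))]
```

With ε = 0 the two routes agree exactly; an ε-inexact prox would make `via_moreau` an infeasible approximation of the projection, which is precisely the situation (11) formalizes.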
The next theorem confirms that it is possible to converge to the true optimum for a non-zero ε (supplementary material, Section D.5).
Theorem 5. Consider the optimization problem (8). Start from any x_0 with ∥x_0 − x*∥ ≤ R, where x* is a solution of (8), and assume that for every k we can obtain P^ε_X(y_k) such that z_k = y_k − P^ε_X(y_k) satisfies (11), where y_k = x_k − α_k h_k, α_k = s/∥h_k∥, ∥h_k∥ ≤ L, ∥x_k − x*∥ ≤ R, and s = √(R²/T + ε). Then the iterates

x_{k+1} = P^ε_X(x_k − α_k h_k), h_k ∈ ∂f(x_k),  (12)

³A more general definition of the proximal operator is prox_{τg′}(v) = argmin_{z∈Dom(g′)} (1/(2τ))∥v − z∥² + g′(z).
To this end, we see that the number of projection steps needs to be at least S = √T with the current choice of step sizes. Let c_p be the cost of one iteration of a FISTA step and c₀ the cost of one outer iteration. The total computational cost can then be estimated as T^{3/2}·c_p + T·c₀.

4.2 Applying IIP to compute SPORE

The problem of computing SPORE can be posed as minimizing a non-smooth convex function over the intersection of two sets: K(G) = S⁺_n ∩ P(G), the intersection of the positive semi-definite cone S⁺_n and a polytope of equality constraints P(G) := {M ∈ S_n | M_ii = 1, M_ij = 0 ∀(i, j) ∉ E}. The algorithm described in Theorem 5 readily applies to this setting if the projection can be computed efficiently. The proximal operator for σ_X can be derived as⁴

prox_{σ_X}(v) = argmin_{a,b∈R^d} p_{σ_X}(a, b; v) = (1/2)∥(a + b) − v∥² + σ_A(a) + σ_B(b). (16)

This means that even if we do not have an efficient procedure for computing prox_{σ_X}(v) directly, we can devise an algorithm that guarantees the approximation (11), provided we can compute prox_{σ_A}(v) and prox_{σ_B}(v) efficiently. This can be done by applying the popular FISTA algorithm to (16), which also guarantees (14). Algorithm 1 (detailed in the supplementary material, named IIP FISTA) performs the following simple steps, followed by the usual FISTA variable updates, at each iteration t: (a) a gradient descent step on a and b with respect to the smooth term (1/2)∥(a + b) − v∥², and (b) a proximal step with respect to σ_A and σ_B using expressions (14) and (21) of the supplementary material.

Using the tools discussed above, we design Algorithm 1 to solve the SPORE formulation (5) using IIP. The proposed algorithm readily applies to general convex sets; however, we confine ourselves to the specific sets of interest in our problem. The following theorem states the convergence rate of the proposed procedure.

Theorem 6. Consider the optimization problem (8) with X = A ∩ B, where A = S⁺_n and B = P(G).
Starting from any K₀ ∈ A, the iterates K_t of Algorithm 1 satisfy

min_{t=0,...,T} f(K_t) − f(K*) ≤ (L/√T)·√(R² + R̂²).

Proof. An immediate extension of Theorem 5; see supplementary material, Section D.6.

⁴The derivation is presented in supplementary material, Claim 6.

Algorithm 1 IIP for SPORE
1: function APPROX-PROJ-SUBG(K₀, L, R, R̂, T)
2:   s = (L/√T)·√(R² + R̂²)   ▷ compute step size
3:   Initialize t₀ = 1.
4:   for t = 1, ..., T do
5:     compute h_{t−1}   ▷ subgradient of f(K) at K_{t−1}; see equation (5)
6:     v_t = K_{t−1} − (s/∥h_{t−1}∥) h_{t−1}
7:     K̃_t = IIP FISTA(v_t, √T)   ▷ FISTA for √T steps; use Algorithm 1 (supp.)
8:     K_t = Proj_A(K̃_t) = K̃_t − prox_{σ_A}(K̃_t)
9:       ▷ K_t needs to be PSD for the next SVM call; use (14) (supp.)
10:  end for
11: end function

Equating problem (8) with the SPORE problem (5), we have f(K) = ω_C(K, y_S) + β λ₁(K). The set of subgradients of f at iteration t contains −(1/2) Y α_t α_t⊤ Y + β v_t v_t⊤, where α_t is returned by the SVM and v_t is the eigenvector corresponding to λ₁(K_t)⁵, and Y is a diagonal matrix with Y_ii = y_i for i ∈ S and 0 otherwise. The step size is calculated using estimates of L, R, and R̂, which can be derived as L = nC², R = n, and R̂ = n^{2.5} for the SPORE problem; see the supplementary material for the derivations.

5 Multiple Graph Transduction

Multiple graph transduction has attracted recent interest in the multi-view setting, where each individual view is expressed by a graph. This includes many practical problems in bioinformatics [17], spam detection [21], etc. We propose an MKL-style extension of SPORE with improved PAC bounds. Formally, the problem of multiple graph transduction is stated as follows: let G = {G(1), ..., G(M)} be a set of simple graphs G(k) = (V, E(k)) defined on a common vertex set V = [n]. Given G and y_S as before, the goal is to accurately predict y_{S̄}.
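The approximate projection onto K(G) = S⁺_n ∩ P(G) needed in Section 4.2 can be illustrated with simple alternating projections between the two sets. This is not the paper's IIP-FISTA inner routine (which works through the support functions σ_A and σ_B); it is only a rough sketch, with helper names of our own, of what an inexact projection onto the intersection looks like.

```python
import numpy as np

def proj_psd(M):
    # Projection onto the PSD cone S+_n: symmetrize and clip negative eigenvalues.
    w, V = np.linalg.eigh((M + M.T) / 2)
    return (V * np.maximum(w, 0.0)) @ V.T

def proj_polytope(M, E):
    # Projection onto P(G) = {M : M_ii = 1, M_ij = 0 for (i, j) not in E}.
    P = np.where(E, M, 0.0)
    np.fill_diagonal(P, 1.0)
    return P

def approx_proj_K(M, E, iters=200):
    # Alternating projections: an inexact (and possibly infeasible) projection
    # onto the intersection K(G); more inner iterations shrink the error.
    for _ in range(iters):
        M = proj_psd(proj_polytope(M, E))
    return M

n = 4
E = np.ones((n, n), dtype=bool)   # complete graph: no off-diagonal zero constraints
K = approx_proj_K(np.random.default_rng(0).normal(size=(n, n)), E)
```

Ending with the PSD projection mirrors line 8 of Algorithm 1, where the iterate is pushed back into A before the next SVM call.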
Following the standard technique of taking a convex combination of graph kernels [16], we propose the following MKL-SPORE formulation:

Φ_{C,β}(G, y_S) = min_{K(k)∈K(G(k))} min_{η∈S^{M−1}} ω_C( Σ_{k∈[M]} η_k K(k), y_S ) + β max_{k∈[M]} λ₁(K(k)). (17)

Similar to Theorem 4, we can show the following (supplementary material, Theorem 8):

er^{0-1}_{S̄}[ŷ] = O( (1/m)·( √(n ϑ(G)) + log(1/δ) )^{1/2} ), where ϑ(G) ≤ min_{k∈[M]} ϑ(G(k)). (18)

It immediately follows that combining multiple graphs improves the error convergence rate (see (6)), and hence the labelled sample complexity. The bound also suggests that the presence of at least one "good" graph is sufficient for MKL-SPORE to learn accurate predictions. This motivates us to use the proposed formulation in the presence of noisy graphs (Section 6). We can also apply the IIP algorithm described in Section 4 to solve (17) (supplementary material, Section F).

6 Experiments

We conducted experiments on both real-world and synthetic graphs to illustrate our theoretical observations. All experiments were run on a CPU with 2 Xeon Quad-Core processors (2.66 GHz, 12 MB L2 cache) and 16 GB of memory running CentOS 5.3.

⁵α_t = argmax_{α∈R^n₊, ∥α∥_∞≤C, α_j=0 ∀j∉S} α⊤1 − (1/2)α⊤ Y K_t Y α and v_t = argmax_{v∈R^n, ∥v∥=1} v⊤ K_t v.

Table 1: SPORE comparison (accuracy %).
Dataset | Un-Lap | N-Lap | KS | SPORE
breast-cancer | 88.22 | 93.33 | 92.77 | 96.67
diabetes | 68.89 | 69.33 | 69.44 | 73.33
fourclass | 70.00 | 70.00 | 70.44 | 78.00
heart | 71.97 | 75.56 | 76.42 | 81.97
ionosphere | 67.77 | 68.00 | 68.11 | 76.11
sonar | 58.81 | 58.97 | 59.29 | 63.92
mnist-1vs2 | 75.55 | 80.55 | 79.66 | 85.77
mnist-3vs8 | 76.88 | 81.88 | 83.33 | 86.11
mnist-4vs9 | 68.44 | 72.00 | 72.22 | 74.88

Table 2: Large scale – 2000 nodes.
Dataset | Un-Lap | N-Lap | KS | SPORE
mnist-1vs2 | 83.80 | 96.23 | 94.95 | 96.72
mnist-3vs8 | 55.15 | 87.35 | 87.35 | 91.35
mnist-5vs6 | 96.30 | 94.90 | 92.05 | 97.35
mnist-1vs7 | 90.65 | 96.80 | 96.55 | 97.25
mnist-4vs9 | 65.55 | 65.05 | 61.30 | 87.40

Graph Transduction (SPORE): We use two sources of datasets, UCI [12] and MNIST [10].
For the UCI datasets, we use the RBF kernel⁶ thresholded at its mean; for the MNIST datasets, we construct a similarity matrix using cosine distance on a random sample of 500 nodes and threshold at 0.4 to obtain unweighted graphs. With 10% labelled nodes, we compare SPORE with formulation (3) using the graph kernels Unnormalized Laplacian (c₁I + L)⁻¹, Normalized Laplacian (c₂I + D^{−1/2} L D^{−1/2})⁻¹, and K-Scaling [1], where L = D − A and D is the diagonal matrix of degrees. We choose the parameters c₁, c₂, C, and β by cross-validation. Table 1 summarizes the results, averaged over 5 different labelled samples, with each entry being accuracy (%) with respect to the 0-1 loss. As expected from Section 3, SPORE significantly outperforms existing methods. We also tackle large-scale graph transduction problems: Table 2 shows the superior performance of Algorithm 1 on a random sample of 2000 nodes, using only 5 outer iterations and 20 inner projections.

Multiple Graph Transduction (MKL-SPORE): We illustrate the effectiveness of combining multiple graphs using mixtures of random graphs G(p, q), p, q ∈ [0, 1], where we fix |V| = n = 100 and labels y ∈ Y^n with y_i = 1 if i ≤ n/2 and −1 otherwise. An edge (i, j) is present with probability p if y_i = y_j, and with probability q otherwise. We generate three datasets to simulate the homogeneous, heterogeneous, and noisy cases, shown in Table 3.

Table 3: Synthetic multiple-graphs dataset.
Graph | Homo. | Heter. | Noisy
G(1) | G(0.7, 0.3) | G(0.7, 0.5) | G(0.7, 0.3)
G(2) | G(0.7, 0.3) | G(0.6, 0.4) | G(0.6, 0.4)
G(3) | G(0.7, 0.3) | G(0.5, 0.3) | G(0.5, 0.5)

Table 4: Superior performance of MKL-SPORE (accuracy %).
Graph | Homo. | Heter. | Noisy
G(1) | 84.4 | 69.2 | 84.4
G(2) | 84.8 | 68.6 | 68.2
G(3) | 86.4 | 72.0 | 54.4
Union | 85.5 | 69.3 | 69.3
Intersection | 83.8 | 67.5 | 69.0
Majority | 93.7 | 76.9 | 76.6
Multiple Graphs | 95.6 | 80.6 | 81.9

MKL-SPORE was compared with the individual graphs, and with the union, intersection, and majority graphs⁷.
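The unweighted-graph construction for the UCI experiments described above can be sketched as follows. The helper name and the two-cluster toy data are ours, and the thresholding convention (keeping edges strictly above the mean off-diagonal similarity) is an assumption.

```python
import numpy as np

def rbf_threshold_graph(X):
    # RBF similarities with sigma = mean pairwise distance (our reading of
    # footnote 6), thresholded at the mean off-diagonal similarity to give
    # an unweighted, symmetric adjacency matrix with no self-loops.
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    iu = np.triu_indices_from(D, k=1)
    sigma = D[iu].mean()
    K = np.exp(-D**2 / (2 * sigma**2))
    A = (K > K[iu].mean()).astype(int)
    np.fill_diagonal(A, 0)
    return A

X = np.vstack([np.zeros((5, 2)), 10 + np.zeros((5, 2))])   # two tight clusters (toy)
A = rbf_threshold_graph(X)
```

On the toy data this recovers two 5-cliques with no edges between the clusters, as the thresholding intends.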
We use SPORE to solve each single-graph transduction problem, and the results are averaged over 10 random samples of 5% labelled nodes. Using the same comparison metric as before, Table 4 shows that combining multiple graphs improves classification accuracy. Furthermore, the noisy case illustrates the robustness of the proposed formulation, a key observation from (18).

7 Conclusion

We show that the class of orthonormal graph embeddings is efficiently PAC learnable. Our analysis motivates a spectral-norm regularized formulation, SPORE, for graph transduction. Using an inexact proximal method, we design an efficient first-order method to solve the proposed formulation. The algorithm and analysis presented readily generalize to the multiple-graphs setting.

Acknowledgments

We acknowledge support from a grant from the Indo-French Center for Applied Mathematics (IFCAM).

⁶The (i, j)th entry of the RBF kernel is given by exp(−∥x_i − x_j∥²/(2σ²)), where σ is set as the mean distance.

⁷The majority graph is the graph in which an edge (i, j) is present if a majority of the graphs have the edge (i, j).

References

[1] R. K. Ando and T. Zhang. Learning on graph with Laplacian regularization. In NIPS, 2007.
[2] N. Balcan and A. Blum. An augmented PAC model for semi-supervised learning. In O. Chapelle, B. Schölkopf, and A. Zien, editors, Semi-Supervised Learning. MIT Press, Cambridge, 2006.
[3] J. P. Boyle and R. L. Dykstra. A method for finding projections onto the intersection of convex sets in Hilbert spaces. In Advances in Order Restricted Statistical Inference, volume 37 of Lecture Notes in Statistics, pages 28–47. Springer New York, 1986.
[4] T. H. Cormen, C. E. Leiserson, R. L. Rivest, and C. Stein. Introduction to Algorithms, volume 2. MIT Press, Cambridge, 2001.
[5] M. Eisenberg-Nagy, M. Laurent, and A. Varvitsiotis. Forbidden minor characterizations for low-rank optimal solutions to semidefinite programs over the elliptope. J. Comb. Theory, Ser. B, 108:40–80, 2014.
[6] A. Erdem and M. Pelillo.
Graph transduction as a non-cooperative game. Neural Computation, 24(3):700–723, 2012.
[7] M. X. Goemans. Semidefinite programming in combinatorial optimization. Mathematical Programming, 79(1-3):143–161, 1997.
[8] V. Jethava, A. Martinsson, C. Bhattacharyya, and D. P. Dubhashi. The Lovász ϑ function, SVMs and finding large dense subgraphs. In NIPS, pages 1169–1177, 2012.
[9] R. Johnson and T. Zhang. On the effectiveness of Laplacian normalization for graph semi-supervised learning. JMLR, 8(7):1489–1517, 2007.
[10] Y. LeCun and C. Cortes. The MNIST database of handwritten digits, 1998.
[11] M. Leordeanu, A. Zanfir, and C. Sminchisescu. Semi-supervised learning and optimization for hypergraph matching. In ICCV, pages 2274–2281. IEEE, 2011.
[12] M. Lichman. UCI machine learning repository, 2013.
[13] L. Lovász. On the Shannon capacity of a graph. IEEE Transactions on Information Theory, 25(1):1–7, 1979.
[14] N. Parikh and S. Boyd. Proximal algorithms. Foundations and Trends in Optimization, 1(3):123–231, 2013.
[15] M. Schmidt, N. Le Roux, and F. R. Bach. Convergence rates of inexact proximal-gradient methods for convex optimization. In NIPS, pages 1458–1466, 2011.
[16] R. Shivanna and C. Bhattacharyya. Learning on graphs using orthonormal representation is statistically consistent. In NIPS, pages 3635–3643, 2014.
[17] L. Tran. Application of three graph Laplacian based semi-supervised learning methods to protein function prediction problem. IJBB, 2013.
[18] S. Villa, S. Salzo, L. Baldassarre, and A. Verri. Accelerated and inexact forward-backward algorithms. SIAM Journal on Optimization, 23(3):1607–1633, 2013.
[19] T. Zhang and R. K. Ando. Analysis of spectral kernel design based semi-supervised learning. NIPS, 18:1601, 2005.
[20] D. Zhou, O. Bousquet, T. N. Lal, J. Weston, and B. Schölkopf. Learning with local and global consistency. NIPS, 16(16):321–328, 2004.
[21] D. Zhou and C. J. C. Burges. Spectral clustering and transductive learning with multiple views.
In ICML, pages 1159–1166. ACM, 2007.
Randomized Block Krylov Methods for Stronger and Faster Approximate Singular Value Decomposition Cameron Musco Massachusetts Institute of Technology, EECS Cambridge, MA 02139, USA cnmusco@mit.edu Christopher Musco Massachusetts Institute of Technology, EECS Cambridge, MA 02139, USA cpmusco@mit.edu Abstract Since being analyzed by Rokhlin, Szlam, and Tygert [1] and popularized by Halko, Martinsson, and Tropp [2], randomized Simultaneous Power Iteration has become the method of choice for approximate singular value decomposition. It is more accurate than simpler sketching algorithms, yet still converges quickly for any matrix, independently of singular value gaps. After Õ(1/ϵ) iterations, it gives a low-rank approximation within (1 + ϵ) of optimal for spectral norm error. We give the first provable runtime improvement on Simultaneous Iteration: a randomized block Krylov method, closely related to the classic Block Lanczos algorithm, gives the same guarantees in just Õ(1/√ϵ) iterations and performs substantially better experimentally. Our analysis is the first of a Krylov subspace method that does not depend on singular value gaps, which are unreliable in practice. Furthermore, while it is a simple accuracy benchmark, even (1 + ϵ) error for spectral norm low-rank approximation does not imply that an algorithm returns high quality principal components, a major issue for data applications. We address this problem for the first time by showing that both Block Krylov Iteration and Simultaneous Iteration give nearly optimal PCA for any matrix. This result further justifies their strength over non-iterative sketching methods. 1 Introduction Any matrix A ∈ R^{n×d} with rank r can be written using a singular value decomposition (SVD) as A = UΣV⊤. U ∈ R^{n×r} and V ∈ R^{d×r} have orthonormal columns (A's left and right singular vectors) and Σ ∈ R^{r×r} is a positive diagonal matrix containing A's singular values: σ₁ ≥ ... ≥ σ_r.
A rank-k partial SVD algorithm returns just the top k left or right singular vectors of A. These are the first k columns of U or V, denoted U_k and V_k respectively. Among countless applications, the SVD is used for optimal low-rank approximation and principal component analysis (PCA). Specifically, for k < r, a partial SVD can be used to construct a rank-k approximation A_k such that both ∥A − A_k∥_F and ∥A − A_k∥₂ are as small as possible. We simply set A_k = U_k U_k⊤ A; that is, A_k is A projected onto the space spanned by its top k singular vectors. For principal component analysis, A's top singular vector u₁ provides the top principal component, which describes the direction of greatest variance within A. The i-th singular vector u_i provides the i-th principal component, which is the direction of greatest variance orthogonal to all higher principal components. Formally, denoting A's i-th singular value as σ_i,

u_i⊤ A A⊤ u_i = σ_i² = max_{x : ∥x∥₂=1, x⊥u_j ∀j<i} x⊤ A A⊤ x.

Traditional SVD algorithms are expensive, typically running in O(nd²) time, so there has been substantial research on randomized techniques that seek nearly optimal low-rank approximation and PCA [3, 4, 1, 2, 5]. These methods are quickly becoming standard tools in practice and implementations are widely available [6, 7, 8, 9], including in popular learning libraries [10]. Recent work focuses on algorithms whose runtimes do not depend on properties of A. In contrast, the classical literature typically gives runtime bounds that depend on the gaps between A's singular values and become useless when these gaps are small (which is often the case in practice – see Section 6). This limitation is due to a focus on how quickly approximate singular vectors converge to the actual singular vectors of A. When two singular vectors have nearly identical values they are difficult to distinguish, so convergence inherently depends on singular value gaps.
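The rank-k approximation A_k = U_k U_k⊤ A described above can be computed directly from a full SVD; a minimal sketch (function name ours):

```python
import numpy as np

def rank_k_approx(A, k):
    # A_k = U_k U_k^T A: project A onto the span of its top k left singular vectors.
    U, _, _ = np.linalg.svd(A, full_matrices=False)
    return U[:, :k] @ (U[:, :k].T @ A)

rng = np.random.default_rng(0)
A = rng.normal(size=(20, 10))
A2 = rank_k_approx(A, 2)
```

By the Eckart-Young theorem, the residual Frobenius norm equals √(Σ_{i>k} σ_i²), which the test below checks.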
Only recently has a shift in approximation goal, along with an improved understanding of randomization, allowed for algorithms that avoid gap dependence and thus run provably fast for any matrix. For low-rank approximation and PCA, we only need to find a subspace that captures nearly as much variance as A's top singular vectors – distinguishing between two close singular values is overkill.

1.1 Prior Work

The fastest randomized SVD algorithms [3, 5] run in O(nnz(A)) time¹, are based on non-iterative sketching methods, and return a rank-k matrix Z with orthonormal columns z₁, ..., z_k satisfying

Frobenius Norm Error: ∥A − ZZ⊤A∥_F ≤ (1 + ϵ)∥A − A_k∥_F. (1)

Unfortunately, as emphasized in prior work [1, 2, 11, 12], Frobenius norm error is often hopelessly insufficient, especially for data analysis and learning applications. When A has a "heavy tail" of singular values, which is common for noisy data, ∥A − A_k∥²_F = Σ_{i>k} σ_i² can be huge, potentially much larger than A's top singular value. This renders (1) meaningless, since Z does not need to align with any large singular vectors to obtain good multiplicative error. To address this shortcoming, a number of papers target spectral norm low-rank approximation error,

Spectral Norm Error: ∥A − ZZ⊤A∥₂ ≤ (1 + ϵ)∥A − A_k∥₂, (2)

which is intuitively stronger. When looking for a rank-k approximation, A's top k singular vectors are often considered data and the remaining tail is considered noise. A spectral norm guarantee roughly ensures that ZZ⊤A recovers A up to this noise threshold. A series of work [1, 2, 13, 14, 15] shows that the decades-old Simultaneous Power Iteration (also called subspace iteration or orthogonal iteration), implemented with random start vectors, achieves (2) after Õ(1/ϵ) iterations. Hence, this method, which was popularized by Halko, Martinsson, and Tropp in [2], has become the randomized SVD algorithm of choice for practitioners [10, 16].
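A basic sketch-and-solve method of the kind described above might look as follows. This is a generic sketch, not the specific algorithm of [3, 5]; the function name and the oversampling parameter are our own choices.

```python
import numpy as np

def sketch_and_solve(A, k, oversample=10):
    # Non-iterative sketching: compress A to A @ Pi for Gaussian Pi, then take
    # the best rank-k approximation to A within span(A @ Pi).
    d = A.shape[1]
    Pi = np.random.default_rng(0).normal(size=(d, k + oversample))
    Q, _ = np.linalg.qr(A @ Pi)
    U, _, _ = np.linalg.svd(Q.T @ A, full_matrices=False)
    return Q @ U[:, :k]     # orthonormal columns spanning the approximation
```

With modest oversampling this comfortably achieves the Frobenius norm guarantee (1) with a small constant, which is exactly why the guarantee itself can be uninformative on heavy-tailed spectra.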
2 Our Results

Algorithm 1 SIMULTANEOUS ITERATION
input: A ∈ R^{n×d}, error ϵ ∈ (0, 1), rank k ≤ n, d
output: Z ∈ R^{n×k}
1: q := Θ(log d / ϵ), Π ∼ N(0, 1)^{d×k}
2: K := (AA⊤)^q AΠ
3: Orthonormalize the columns of K to obtain Q ∈ R^{n×k}.
4: Compute M := Q⊤AA⊤Q ∈ R^{k×k}.
5: Set Ū_k to the top k singular vectors of M.
6: return Z = QŪ_k.

Algorithm 2 BLOCK KRYLOV ITERATION
input: A ∈ R^{n×d}, error ϵ ∈ (0, 1), rank k ≤ n, d
output: Z ∈ R^{n×k}
1: q := Θ(log d / √ϵ), Π ∼ N(0, 1)^{d×k}
2: K := [AΠ, (AA⊤)AΠ, ..., (AA⊤)^q AΠ]
3: Orthonormalize the columns of K to obtain Q ∈ R^{n×qk}.
4: Compute M := Q⊤AA⊤Q ∈ R^{qk×qk}.
5: Set Ū_k to the top k singular vectors of M.
6: return Z = QŪ_k.

2.1 Faster Algorithm

We show that Algorithm 2, a randomized relative of the Block Lanczos algorithm [17, 18], which we call Block Krylov Iteration, gives the same guarantees as Simultaneous Iteration (Algorithm 1) in just Õ(1/√ϵ) iterations. This not only gives the fastest known theoretical runtime for achieving (2), but also yields substantially better performance in practice (see Section 6).

¹Here nnz(A) is the number of non-zero entries in A, and this runtime hides lower-order terms.

Even though the algorithm has been discussed and tested for potential improvement over Simultaneous Iteration [1, 19, 20], theoretical bounds for Krylov subspace and Lanczos methods are much more limited. As highlighted in [11], "Despite decades of research on Lanczos methods, the theory for [randomized power iteration] is more complete and provides strong guarantees of excellent accuracy, whether or not there exist any gaps between the singular values." Our work addresses this issue, giving the first gap-independent bound for a Krylov subspace method.

2.2 Stronger Guarantees

In addition to runtime improvements, we target a much stronger notion of approximate SVD that is needed for many applications, but for which no gap-independent analysis was known.
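Both algorithms translate almost line-for-line into NumPy. The sketch below follows the pseudocode above (with per-iteration reorthonormalization for Algorithm 1, as later discussed in Section 5); fixed seeds and function names are ours.

```python
import numpy as np

def simultaneous_iteration(A, k, q):
    # Algorithm 1: K = (AA^T)^q A Pi, computed iteratively with QR for stability.
    K = A @ np.random.default_rng(0).normal(size=(A.shape[1], k))
    for _ in range(q):
        K, _ = np.linalg.qr(K)
        K = A @ (A.T @ K)
    Q, _ = np.linalg.qr(K)
    M = Q.T @ (A @ (A.T @ Q))
    _, U = np.linalg.eigh(M)            # eigh: eigenvalues in ascending order
    return Q @ U[:, ::-1][:, :k]        # Z = Q Ubar_k (top k directions)

def block_krylov_iteration(A, k, q):
    # Algorithm 2: build the Krylov block [A Pi, (AA^T) A Pi, ..., (AA^T)^q A Pi].
    B = A @ np.random.default_rng(0).normal(size=(A.shape[1], k))
    blocks = [B]
    for _ in range(q):
        B = A @ (A.T @ B)
        blocks.append(B)
    Q, _ = np.linalg.qr(np.hstack(blocks))
    M = Q.T @ (A @ (A.T @ Q))
    _, U = np.linalg.eigh(M)
    return Q @ U[:, ::-1][:, :k]
```

Both return an n × k matrix Z with orthonormal columns; post-processing through M is what yields the per vector guarantee rather than just a good subspace.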
Specifically, as noted in [21], while intuitively stronger than Frobenius norm error, (1 + ϵ) spectral norm low-rank approximation error does not guarantee any accuracy in Z for many matrices². Consider A with its top k + 1 squared singular values all equal to 10, followed by a tail of smaller singular values (e.g. 1000k values at 1). Here ∥A − A_k∥₂² = 10, but in fact ∥A − ZZ⊤A∥₂² = 10 for any rank-k Z, leaving the spectral norm bound useless. At the same time, ∥A − A_k∥²_F is large, so Frobenius error is meaningless as well: for example, any Z obtains ∥A − ZZ⊤A∥²_F ≤ (1.01)∥A − A_k∥²_F.

With this scenario in mind, it is unsurprising that low-rank approximation guarantees fail as an accuracy measure in practice. We ran a standard sketch-and-solve approximate SVD algorithm (see Section 3) on SNAP/AMAZON0302, an Amazon product co-purchasing dataset [22, 23], and achieved very good low-rank approximation error in both norms for k = 30: ∥A − ZZ⊤A∥_F < 1.001∥A − A_k∥_F and ∥A − ZZ⊤A∥₂ < 1.038∥A − A_k∥₂. However, the approximate principal components given by Z are of significantly lower quality than A's true singular vectors (see Figure 1). We saw similar results for a number of other datasets.

Figure 1: Poor per vector error (3) for SNAP/AMAZON0302 returned by a sketch-and-solve approximate SVD that gives very good low-rank approximation in both spectral and Frobenius norm. The plot compares σ_i² = u_i⊤(AA⊤)u_i with z_i⊤(AA⊤)z_i for i = 1, ..., 30.

We address this issue by introducing a per vector guarantee that requires each approximate singular vector z₁, ..., z_k to capture nearly as much variance as the corresponding true singular vector:

Per Vector Error: ∀i, u_i⊤AA⊤u_i − z_i⊤AA⊤z_i ≤ ϵσ²_{k+1}. (3)

The error bound (3) is very strong in that it depends on ϵσ²_{k+1}, which is better than relative error for A's large singular values.
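The per vector error (3) is easy to measure directly; a small sketch (function name ours):

```python
import numpy as np

def per_vector_error(A, Z):
    # Measures guarantee (3): u_i^T AA^T u_i - z_i^T AA^T z_i for each column
    # of Z, together with the sigma_{k+1}^2 scale it is compared against.
    k = Z.shape[1]
    s = np.linalg.svd(A, compute_uv=False)
    G = A @ A.T
    errs = np.array([s[i]**2 - Z[:, i] @ G @ Z[:, i] for i in range(k)])
    return errs, s[k]**2

A = np.diag([5.0, 4.0, 3.0, 1.0])
errs, scale = per_vector_error(A, np.eye(4)[:, :2])   # exact top-2 singular vectors
```

With exact singular vectors the per vector errors vanish; a Figure 1-style diagnostic is just this quantity plotted against i.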
While it is reminiscent of the bounds sought in classical numerical analysis [24], we stress that (3) does not require each z_i to converge to u_i in the presence of small singular value gaps. In fact, we show that both randomized Block Krylov Iteration and our slightly modified Simultaneous Iteration algorithm achieve (3) in gap-independent runtimes.

2.3 Main Result

Our contributions are summarized in Theorem 1. Its detailed proof is relegated to the full version of this paper [25]. The runtimes are given in Theorems 6 and 7, and the three error bounds are shown in Theorems 10, 11, and 12. In Section 4 we provide a sketch of the main ideas behind the result.

Theorem 1 (Main Theorem). With high probability, Algorithms 1 and 2 find approximate singular vectors Z = [z₁, ..., z_k] satisfying guarantees (1) and (2) for low-rank approximation and (3) for PCA. For error ϵ, Algorithm 1 requires q = O(log d/ϵ) iterations while Algorithm 2 requires q = O(log d/√ϵ) iterations. Excluding lower-order terms, both algorithms run in time O(nnz(A)kq).

²In fact, it does not even imply (1 + ϵ) Frobenius norm error.

In the full version of this paper we also use our results to give an alternative analysis that does depend on singular value gaps and can offer significantly faster convergence when A has decaying singular values. It is possible to take further advantage of this result by running Algorithms 1 and 2 with a Π that has more than k columns, a simple modification for accelerating either method. In Section 6 we test both algorithms on a number of large datasets. We justify the importance of gap-independent bounds for predicting algorithm convergence and we show that Block Krylov Iteration in fact significantly outperforms the more popular Simultaneous Iteration.

2.4 Comparison to Classical Bounds

Decades of work has produced a variety of gap-dependent bounds for Krylov methods [26]. Most relevant to our work are bounds for block Krylov methods with block size equal to k [27].
Roughly speaking, with randomized initialization, these results offer guarantees equivalent to our strong equation (3) for the top k singular directions after O(log(d/ϵ)/√(σ_k/σ_{k+1} − 1)) iterations. This bound is recovered in Section 7 of this paper's full version [25]. When the target accuracy ϵ is smaller than the relative singular value gap (σ_k/σ_{k+1} − 1), it is tighter than our gap-independent results. However, as discussed in Section 6, for high-dimensional data problems where ϵ is set far above machine precision, gap-independent bounds more accurately predict the required iteration count. Prior work also attempts to analyze algorithms with block size smaller than k [24]. While "small block" algorithms offer runtime advantages, it is well understood that with b duplicate singular values, it is impossible to recover the top k singular directions with a block of size < b [28]. More generally, large singular value clusters slow convergence, so any small-block algorithm must have runtime dependence on the gaps between each adjacent pair of top singular values [29].

3 Analyzing Simultaneous Iteration

Before discussing our proof of Theorem 1, we review prior work on Simultaneous Iteration to demonstrate how it can achieve the spectral norm guarantee (2). Algorithms for Frobenius norm error (1) typically work by sketching A into very few dimensions using a Johnson-Lindenstrauss random projection matrix Π with poly(k/ϵ) columns:

A (n×d) × Π (d×poly(k/ϵ)) = AΠ (n×poly(k/ϵ)).

Π is usually a random Gaussian or (possibly sparse) random sign matrix, and Z is computed using the SVD of AΠ or of A projected onto AΠ [3, 5, 30]. This "sketch-and-solve" approach is very efficient – the computation of AΠ is easily parallelized and, regardless, pass-efficient in a single-processor setting. Furthermore, once a small compression of A is obtained, it can be manipulated in fast memory for the final computation of Z.
However, Frobenius norm error seems an inherent limitation of sketch-and-solve methods. The noise from A's lower r − k singular values corrupts AΠ, making it impossible to extract a good partial SVD if the sum of these singular values (equal to ∥A − A_k∥²_F) is too large. In order to achieve spectral norm error (2), Simultaneous Iteration must reduce this noise down to the scale of σ_{k+1} = ∥A − A_k∥₂. It does this by working with the powered matrix A^q [31]³. By the spectral theorem, A^q has exactly the same singular vectors as A, but its singular values are equal to those of A raised to the q-th power. Powering spreads the values apart, and accordingly A^q's lower singular values are relatively much smaller than its top singular values (see the example in Figure 2a). Specifically, q = O(log d/ϵ) is sufficient to increase any singular value ≥ (1 + ϵ)σ_{k+1} to be significantly (i.e. poly(d) times) larger than any value ≤ σ_{k+1}. This effectively denoises our problem – if we use a sketching method to find a good Z for approximating A^q up to Frobenius norm error, Z will have to align very well with every singular vector with value ≥ (1 + ϵ)σ_{k+1}. It thus provides an accurate basis for approximating A up to small spectral norm error.

³For nonsymmetric matrices we work with (AA⊤)^q A, but present the symmetric case here for simplicity.

Figure 2: Replacing A with a matrix polynomial facilitates higher-accuracy approximation. (a) A's singular values compared to those of A^q, rescaled to match on σ₁; notice the significantly reduced tail after σ₈. (b) An O(1/√ϵ)-degree Chebyshev polynomial, T_{O(1/√ϵ)}(x), pushes low values nearly as close to zero as x^{O(1/ϵ)}.

Computing A^q directly is costly, so A^qΠ is computed iteratively – start with a random Π and repeatedly multiply by A on the left.
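The denoising effect of powering is easy to verify numerically: since A^q has singular values σ_i^q, the tail-to-head ratio is raised to the q-th power. A small sketch on a synthetic symmetric matrix with a known spectrum (our own construction):

```python
import numpy as np

rng = np.random.default_rng(1)
V, _ = np.linalg.qr(rng.normal(size=(6, 6)))
# Symmetric PSD matrix with known spectrum [2, 1.9, 1.8, 1, 0.9, 0.8].
A = V @ np.diag([2.0, 1.9, 1.8, 1.0, 0.9, 0.8]) @ V.T
q = 10
Aq = np.linalg.matrix_power(A, q)
s = np.linalg.svd(A, compute_uv=False)
sq = np.linalg.svd(Aq, compute_uv=False)
ratio_before = s[3] / s[0]    # tail/head for A: 1.0 / 2.0 = 0.5
ratio_after = sq[3] / sq[0]   # tail/head for A^q = (1/2)^q
```

The top three values stay clustered while the tail collapses by a factor of 2^q, which is exactly what Figure 2a depicts.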
Since even a rough Frobenius norm approximation for A^q suffices, Π can be chosen to have just k columns. Each iteration thus takes O(nnz(A)k) time. When analyzing Simultaneous Iteration, [15] uses the following randomized sketch-and-solve result to find a Z that gives a coarse Frobenius norm approximation to B = A^q and therefore a good spectral norm approximation to A. The lemma is numbered for consistency with our full paper.

Lemma 4 (Frobenius Norm Low-Rank Approximation). Consider any B ∈ R^{n×d} and Π ∈ R^{d×k} where the entries of Π are independent Gaussians drawn from N(0, 1). If we let Z be an orthonormal basis for span(BΠ), then with probability at least 99/100, for some fixed constant c,

∥B − ZZ⊤B∥²_F ≤ c·dk·∥B − B_k∥²_F.

For analyzing block methods, results like Lemma 4 can effectively serve as a replacement for earlier random initialization analysis that applies to single-vector power and Krylov methods [32]. We have σ_{k+1}(A^q) ≤ (1/poly(d))·σ_m(A^q) for any m with σ_m(A) ≥ (1 + ϵ)σ_{k+1}(A). Plugging into Lemma 4:

∥A^q − ZZ⊤A^q∥²_F ≤ c·dk·Σ_{i=k+1}^{r} σ_i²(A^q) ≤ c·dk·d·σ²_{k+1}(A^q) ≤ σ²_m(A^q)/poly(d).

Rearranging using the Pythagorean theorem, we have

∥ZZ⊤A^q∥²_F ≥ ∥A^q∥²_F − σ²_m(A^q)/poly(d).

That is, A^q's projection onto Z captures nearly all of its Frobenius norm. This is only possible if Z aligns very well with the top singular vectors of A^q and hence gives a good spectral norm approximation for A.

4 Proof Sketch for Theorem 1

The intuition for beating Simultaneous Iteration with Block Krylov Iteration matches that of many accelerated iterative methods. Simply put, there are better polynomials than A^q for denoising tail singular values. In particular, we can use a lower-degree polynomial, allowing us to compute fewer powers of A and thus leading to an algorithm with fewer iterations.
For example, an appropriately shifted q = O(log(d)/√ϵ)-degree Chebyshev polynomial can push the tail of A nearly as close to zero as A^{O(log d/ϵ)}, even if the long-run growth of the polynomial is much lower (see Figure 2b). Specifically, we prove the following scalar polynomial lemma in the full version of our paper [25], which can then be applied to effectively denoise A's singular value tail.

Lemma 5 (Chebyshev Minimizing Polynomial). For ϵ ∈ (0, 1] and q = O(log d/√ϵ), there exists a degree-q polynomial p(x) such that p((1 + ϵ)σ_{k+1}) = (1 + ϵ)σ_{k+1} and

1) p(x) ≥ x for x ≥ (1 + ϵ)σ_{k+1},
2) |p(x)| ≤ σ_{k+1}/poly(d) for x ≤ σ_{k+1}.

Furthermore, we can choose the polynomial to only contain monomials with odd powers.

Block Krylov Iteration takes advantage of such polynomials by working with the Krylov subspace

K = [Π, AΠ, A²Π, A³Π, ..., A^qΠ],

from which we can construct p_q(A)Π for any polynomial p_q(·) of degree q⁴. Since the polynomial from Lemma 5 must be scaled and shifted based on the value of σ_{k+1}, we cannot easily compute it directly. Instead, we argue that the very best rank-k approximation to A lying in the span of K at least matches the approximation achieved by projecting onto the span of p_q(A)Π. Finding this best approximation will therefore give a nearly optimal low-rank approximation to A.

Unfortunately, there is a catch. Surprisingly, it is not clear how to efficiently compute the best spectral norm error low-rank approximation to A lying in a given subspace (e.g. K's span) [14, 33]. This challenge precludes an analysis of Krylov methods parallel to recent work on Simultaneous Iteration. Nevertheless, since our analysis shows that projecting to Z captures nearly all the Frobenius norm of p_q(A), we can show that the best Frobenius norm low-rank approximation to A in the span of K gives good enough spectral norm approximation. By the following lemma, this optimal Frobenius norm low-rank approximation is given by ZZ⊤A, where Z is exactly the output of Algorithm 2.
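The mechanism behind Lemma 5 is the standard extremal growth of Chebyshev polynomials: T_q stays bounded by 1 on [−1, 1] yet grows like cosh(q·arccosh(x)) outside it. The sketch below checks this numerically for q = 10; it illustrates the mechanism only and does not construct the paper's shifted and scaled polynomial p(x).

```python
import numpy as np
from numpy.polynomial.chebyshev import Chebyshev

q = 10
Tq = Chebyshev.basis(q)                          # degree-q Chebyshev polynomial T_q
inside = np.abs(Tq(np.linspace(-1.0, 1.0, 1001))).max()  # bounded by 1 on [-1, 1]
outside = float(Tq(1.1))                         # cosh(q * arccosh(1.1)), approx 42
```

A 10% excursion outside the interval already yields a factor of roughly 42 between "head" and "tail" values, which plain powering x^q would need a much higher degree to match at the same excursion.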
Lemma 6 (Lemma 4.1 of [15]). Given A ∈ R^{n×d} and Q ∈ R^{n×m} with orthonormal columns,

∥A − (QQ⊤A)_k∥_F = ∥A − Q(Q⊤A)_k∥_F = min_{C : rank(C)=k} ∥A − QC∥_F.

Q(Q⊤A)_k can be obtained using an SVD of the m × m matrix M = Q⊤(AA⊤)Q. Specifically, letting M = ŪΣ̄²Ū⊤ be the SVD of M and Z = QŪ_k, we have Q(Q⊤A)_k = ZZ⊤A.

4.1 Stronger Per Vector Error Guarantees

Achieving the per vector guarantee of (3) requires a more nuanced understanding of how Simultaneous Iteration and Block Krylov Iteration denoise the spectrum of A. The analysis for spectral norm low-rank approximation relies on the fact that A^q (or p_q(A) for Block Krylov Iteration) blows up any singular value ≥ (1 + ϵ)σ_{k+1} to be much larger than any singular value ≤ σ_{k+1}. This ensures that our output Z aligns very well with the singular vectors corresponding to these large singular values. If σ_k ≥ (1 + ϵ)σ_{k+1}, then Z aligns well with all top k singular vectors of A and we get good Frobenius norm error and the per vector guarantee (3).

Unfortunately, when there is a small gap between σ_k and σ_{k+1}, Z could miss intermediate singular vectors whose values lie between σ_{k+1} and (1 + ϵ)σ_{k+1}. This is the case where the gap-dependent guarantees of classical analysis break down. However, A^q or, for Block Krylov Iteration, some q-degree polynomial in our Krylov subspace, also significantly separates singular values > σ_{k+1} from those < (1 − ϵ)σ_{k+1}. Thus, each column of Z at least aligns with A nearly as well as u_{k+1} does. So, even if we miss singular values between σ_{k+1} and (1 + ϵ)σ_{k+1}, they will be replaced with approximate singular values > (1 − ϵ)σ_{k+1}, which is enough for (3). For Frobenius norm low-rank approximation (1), we prove that the degree to which Z falls outside the span of A's top k singular vectors depends on the number of singular values between σ_{k+1} and (1 − ϵ)σ_{k+1}. These are the values that could be 'swapped in' for the true top k singular values.
Since their weight counts towards A's tail, our total loss compared to optimal is at worst ε‖A − A_k‖²_F.

5 Implementation and Runtimes

For both Algorithms 1 and 2, Π can be replaced by a random sign matrix, or any matrix achieving the guarantee of Lemma 4. Π may also be chosen with p > k columns. In our full paper [25], we discuss in detail how this approach can give improved accuracy.

5.1 Simultaneous Iteration

In our implementation we set Z = Q Ū_k, which is necessary for achieving per vector guarantees for approximate PCA. However, for near optimal low-rank approximation, we can simply set Z = Q: projecting A to Q Ū_k is equivalent to projecting to Q, as these matrices have the same column spans.

Since powering A spreads its singular values, K = (AA^T)^q AΠ could be poorly conditioned. To improve stability we orthonormalize K after every iteration (or every few iterations). This does not change K's column span, so it gives an equivalent algorithm in exact arithmetic.

⁴Algorithm 2 in fact only constructs odd-powered terms in K, which is sufficient for our choice of p_q(x).

Theorem 7 (Simultaneous Iteration Runtime). Algorithm 1 runs in time
O(nnz(A)k log(d)/ε + nk² log(d)/ε).

Proof. Computing K requires first multiplying A by Π, which takes O(nnz(A)k) time. Computing (AA^T)^i AΠ given (AA^T)^{i−1} AΠ then takes O(nnz(A)k) time, to first multiply our (n × k) matrix by A^T and then by A. Reorthogonalizing after each iteration takes O(nk²) time via Gram-Schmidt. This gives a total runtime of O(nnz(A)kq + nk²q) for computing K. Finding Q takes O(nk²) time. Computing M by multiplying from left to right requires O(nnz(A)k + nk²) time. M's SVD then requires O(k³) time using classical techniques. Finally, multiplying Ū_k by Q takes time O(nk²). Setting q = Θ(log d/ε) gives the claimed runtime.

5.2 Block Krylov Iteration

In the traditional Block Lanczos algorithm, one starts by computing an orthonormal basis for AΠ, the first block in K.
Bases for subsequent blocks are computed from previous blocks using a three-term recurrence that ensures Q^T AA^T Q is block tridiagonal, with k × k sized blocks [18]. This technique can be useful if qk is large, since it is faster to compute the top singular vectors of a block tridiagonal matrix. However, computing Q using a recurrence can introduce a number of stability issues, and additional steps may be required to ensure that the matrix remains orthogonal [28]. An alternative, used in [1], [19], and our Algorithm 2, is to compute K explicitly and then find Q using a QR decomposition. This method does not guarantee that Q^T AA^T Q is block tridiagonal, but avoids stability issues. Furthermore, if qk is small, taking the SVD of Q^T AA^T Q will still be fast and typically dominated by the cost of computing K. As with Simultaneous Iteration, we orthonormalize each block of K after it is computed, avoiding poorly conditioned blocks and giving an equivalent algorithm in exact arithmetic.

Theorem 8 (Block Krylov Iteration Runtime). Algorithm 2 runs in time
O(nnz(A)k log(d)/√ε + nk² log²(d)/ε + k³ log³(d)/ε^{3/2}).

Proof. Computing K, including reorthogonalization, requires O(nnz(A)kq + nk²q) time. The remaining steps are analogous to those in Simultaneous Iteration, except somewhat more costly as we work with a kq-dimensional rather than k-dimensional subspace. Finding Q takes O(n(kq)²) time. Computing M takes O(nnz(A)(kq) + n(kq)²) time, and its SVD then requires O((kq)³) time. Finally, multiplying Ū_k by Q takes time O(nk(kq)). Setting q = Θ(log d/√ε) gives the claimed runtime.

6 Experiments

We close with several experimental results. A variety of empirical papers, not to mention widespread adoption, already justify the use of randomized SVD algorithms. Prior work focuses in particular on benchmarking Simultaneous Iteration [19, 11] and, due to its improved accuracy over sketch-and-solve approaches, this algorithm is popular in practice [10, 16].
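For reference, the structure of both methods, per-iteration orthonormalization for Simultaneous Iteration and per-block orthonormalization plus a single QR for Block Krylov, can be sketched in dense NumPy as follows. This is our own hedged translation, not the paper's MATLAB implementation, and the function names are ours.

```python
import numpy as np

def simultaneous_iteration(A, k, q, rng):
    """Sketch of Algorithm 1: K = (A A^T)^q A Pi, re-orthonormalized
    after every iteration, then a small SVD of M = Q^T A A^T Q."""
    K, _ = np.linalg.qr(A @ rng.standard_normal((A.shape[1], k)))
    for _ in range(q):
        K, _ = np.linalg.qr(A @ (A.T @ K))   # stabilized power iteration
    Q = K                                    # already orthonormal
    Ubar, _, _ = np.linalg.svd(Q.T @ A @ (A.T @ Q))
    return Q @ Ubar[:, :k]                   # Z = Q Ubar_k

def block_krylov(A, k, q, rng):
    """Sketch of Algorithm 2: K = [A Pi, (A A^T) A Pi, ..., (A A^T)^q A Pi],
    each block orthonormalized as computed, then QR and a small SVD."""
    blocks = [np.linalg.qr(A @ rng.standard_normal((A.shape[1], k)))[0]]
    for _ in range(q):
        blocks.append(np.linalg.qr(A @ (A.T @ blocks[-1]))[0])
    Q, _ = np.linalg.qr(np.hstack(blocks))   # n x k(q+1) orthonormal basis
    Ubar, _, _ = np.linalg.svd(Q.T @ A @ (A.T @ Q))
    return Q @ Ubar[:, :k]

rng = np.random.default_rng(0)
A = rng.standard_normal((200, 150))
k = 5
s = np.linalg.svd(A, compute_uv=False)
opt_f = np.linalg.norm(s[k:])                # ||A - A_k||_F

Z_si = simultaneous_iteration(A, k, q=15, rng=rng)
Z_bk = block_krylov(A, k, q=10, rng=rng)
err_si = np.linalg.norm(A - Z_si @ (Z_si.T @ A), 'fro') / opt_f
err_bk = np.linalg.norm(A - Z_bk @ (Z_bk.T @ A), 'fro') / opt_f
spec_bk = np.linalg.norm(A - Z_bk @ (Z_bk.T @ A), 2) / s[k]
print(err_si, err_bk, spec_bk)
```

Both error ratios sit very close to 1 on this random test matrix, consistent with the guarantees above.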
As such, we focus on demonstrating that for many data problems Block Krylov Iteration can offer significantly better convergence. We implement both algorithms in MATLAB using Gaussian random starting matrices with exactly k columns. We explicitly compute K for both algorithms, as described in Section 5, and use reorthonormalization at each iteration to improve stability [34]. We test the algorithms with varying iteration count q on three common datasets, SNAP/AMAZON0302 [22, 23], SNAP/EMAIL-ENRON [22, 35], and 20 NEWSGROUPS [36], computing column principal components in all cases. We plot error vs. iteration count for metrics (1), (2), and (3) in Figure 3. For per vector error (3), we plot the maximum deviation amongst all top k approximate principal components (relative to σ_{k+1}).

Unsurprisingly, both algorithms obtain very accurate Frobenius norm error, ‖A − ZZ^T A‖_F /‖A − A_k‖_F, with very few iterations. This is our intuitively weakest guarantee and, in the presence of a heavy singular value tail, both iterative algorithms will outperform the worst case analysis. On the other hand, for spectral norm low-rank approximation and per vector error, we confirm that Block Krylov Iteration converges much more rapidly than Simultaneous Iteration, as predicted by our theoretical analysis.

[Figure 3: Low-rank approximation and per vector error convergence rates for Algorithms 1 and 2, plotting Frobenius, spectral, and per vector error ε against iteration count q for both methods. Panels: (a) SNAP/AMAZON0302, k = 30; (b) SNAP/EMAIL-ENRON, k = 10; (c) 20 NEWSGROUPS, k = 20; (d) 20 NEWSGROUPS, k = 20, runtime cost.]

It is often possible to achieve nearly optimal error with < 8 iterations, whereas getting to within, say, 1% error with Simultaneous Iteration can take much longer. The final plot in Figure 3 shows error versus runtime for the 11269 × 15088 dimensional 20 NEWSGROUPS dataset. We averaged over 7 trials and ran the experiments on a commodity laptop with 16GB of memory. As predicted, because its additional memory overhead and post-processing costs are small compared to the cost of the large matrix multiplication required for each iteration, Block Krylov Iteration outperforms Simultaneous Iteration for small ε.

More generally, these results justify the importance of convergence bounds that are independent of singular value gaps. Our analysis in Section 6 of the full paper predicts that, once ε is small in comparison to the gap σ_k/σ_{k+1} − 1, we should see much more rapid convergence, since q will depend on log(1/ε) instead of 1/ε. However, for Simultaneous Iteration, we do not see this behavior with SNAP/AMAZON0302, and it only just begins to emerge for 20 NEWSGROUPS. While all three datasets have rapid singular value decay, a careful look confirms that their singular value gaps are actually quite small!
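The three metrics plotted in Figure 3 can be computed for any candidate basis Z in a few lines; the sketch below (our own, with assumed function names) measures the per vector deviation against σ_{k+1}², matching the form of guarantee (3):

```python
import numpy as np

def approximation_errors(A, Z, k):
    """Frobenius (1), spectral (2), and per vector (3) error of an
    orthonormal n x k basis Z, relative to the exact rank-k SVD of A."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    opt_f = np.linalg.norm(s[k:])            # ||A - A_k||_F
    opt_2 = s[k]                             # sigma_{k+1}
    R = A - Z @ (Z.T @ A)
    eps_f = np.linalg.norm(R, 'fro') / opt_f - 1.0
    eps_2 = np.linalg.norm(R, 2) / opt_2 - 1.0
    # per vector: max_i |u_i^T A A^T u_i - z_i^T A A^T z_i| / sigma_{k+1}^2
    B = A @ A.T
    dev = np.abs(np.einsum('ij,ij->j', U[:, :k], B @ U[:, :k])
                 - np.einsum('ij,ij->j', Z, B @ Z))
    eps_pv = np.max(dev) / opt_2 ** 2
    return eps_f, eps_2, eps_pv

rng = np.random.default_rng(2)
A = rng.standard_normal((100, 80))
k = 4
U, s, Vt = np.linalg.svd(A, full_matrices=False)
errs = approximation_errors(A, U[:, :k], k)  # exact top-k basis
print(errs)
```

Feeding in the exact top-k singular vectors drives all three errors to (numerically) zero, a useful sanity check before comparing iterative outputs.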
For example, σ_k/σ_{k+1} − 1 is .004 for SNAP/AMAZON0302 and .011 for 20 NEWSGROUPS, in comparison to .042 for SNAP/EMAIL-ENRON. Accordingly, the frequent claim that singular value gaps can be taken as constant is insufficient, even for small ε.

References

[1] Vladimir Rokhlin, Arthur Szlam, and Mark Tygert. A randomized algorithm for principal component analysis. SIAM Journal on Matrix Analysis and Applications, 31(3):1100–1124, 2009.
[2] Nathan Halko, Per-Gunnar Martinsson, and Joel Tropp. Finding structure with randomness: Probabilistic algorithms for constructing approximate matrix decompositions. SIAM Review, 53(2):217–288, 2011.
[3] Tamás Sarlós. Improved approximation algorithms for large matrices via random projections. In Proceedings of the 47th Annual IEEE Symposium on Foundations of Computer Science (FOCS), 2006.
[4] Per-Gunnar Martinsson, Vladimir Rokhlin, and Mark Tygert. A randomized algorithm for the approximation of matrices. Technical Report 1361, Yale University, 2006.
[5] Kenneth Clarkson and David Woodruff. Low rank approximation and regression in input sparsity time. In Proceedings of the 45th Annual ACM Symposium on Theory of Computing (STOC), pages 81–90, 2013.
[6] Antoine Liutkus. Randomized SVD, 2014. MATLAB Central File Exchange.
[7] Daisuke Okanohara. redsvd: RandomizED SVD. https://code.google.com/p/redsvd/, 2010.
[8] David Hall et al. ScalaNLP: Breeze. http://www.scalanlp.org/, 2009.
[9] IBM Research Division, Skylark Team. libskylark: Sketching-based Distributed Matrix Computations for Machine Learning. IBM Corporation, Armonk, NY, 2014.
[10] F. Pedregosa et al. Scikit-learn: Machine learning in Python. JMLR, 12:2825–2830, 2011.
[11] Arthur Szlam, Yuval Kluger, and Mark Tygert. An implementation of a randomized algorithm for principal component analysis. arXiv:1412.3510, 2014.
[12] Zohar Karnin and Edo Liberty. Online PCA with spectral bounds.
In Proceedings of the 28th Annual Conference on Computational Learning Theory (COLT), pages 505–509, 2015.
[13] Rafi Witten and Emmanuel J. Candès. Randomized algorithms for low-rank matrix factorizations: Sharp performance bounds. Algorithmica, 31(3):1–18, 2014.
[14] Christos Boutsidis, Petros Drineas, and Malik Magdon-Ismail. Near-optimal column-based matrix reconstruction. SIAM Journal on Computing, 43(2):687–717, 2014.
[15] David P. Woodruff. Sketching as a tool for numerical linear algebra. Foundations and Trends in Theoretical Computer Science, 10(1-2):1–157, 2014.
[16] Andrew Tulloch. Fast randomized singular value decomposition. http://research.facebook.com/blog/294071574113354/fast-randomized-svd/, 2014.
[17] Jane Cullum and W.E. Donath. A block Lanczos algorithm for computing the q algebraically largest eigenvalues and a corresponding eigenspace of large, sparse, real symmetric matrices. In IEEE Conference on Decision and Control including the 13th Symposium on Adaptive Processes, pages 505–509, 1974.
[18] Gene Golub and Richard Underwood. The block Lanczos method for computing eigenvalues. Mathematical Software, (3):361–377, 1977.
[19] Nathan Halko, Per-Gunnar Martinsson, Yoel Shkolnisky, and Mark Tygert. An algorithm for the principal component analysis of large data sets. SIAM Journal on Scientific Computing, 33(5):2580–2594, 2011.
[20] Nathan Halko. Randomized methods for computing low-rank approximations of matrices. PhD thesis, University of Colorado, 2012.
[21] Ming Gu. Subspace iteration randomization and singular value problems. arXiv:1408.2208, 2014.
[22] Timothy A. Davis and Yifan Hu. The University of Florida sparse matrix collection. ACM Transactions on Mathematical Software, 38(1):1:1–1:25, December 2011.
[23] Jure Leskovec, Lada A. Adamic, and Bernardo A. Huberman. The dynamics of viral marketing. ACM Transactions on the Web, 1(1), May 2007.
[24] Y. Saad. On the rates of convergence of the Lanczos and the block-Lanczos methods.
SIAM Journal on Numerical Analysis, 17(5):687–706, 1980.
[25] Cameron Musco and Christopher Musco. Randomized block Krylov methods for stronger and faster approximate singular value decomposition. arXiv:1504.05477, 2015.
[26] Yousef Saad. Numerical Methods for Large Eigenvalue Problems: Revised Edition, volume 66. 2011.
[27] Gene Golub, Franklin Luk, and Michael Overton. A block Lanczos method for computing the singular values and corresponding singular vectors of a matrix. ACM Transactions on Mathematical Software, 7(2):149–169, 1981.
[28] G.H. Golub and C.F. Van Loan. Matrix Computations. Johns Hopkins University Press, 3rd edition, 1996.
[29] Ren-Cang Li and Lei-Hong Zhang. Convergence of the block Lanczos method for eigenvalue clusters. Numerische Mathematik, 131(1):83–113, 2015.
[30] Michael B. Cohen, Sam Elder, Cameron Musco, Christopher Musco, and Madalina Persu. Dimensionality reduction for k-means clustering and low rank approximation. In Proceedings of the 47th Annual ACM Symposium on Theory of Computing (STOC), 2015.
[31] Friedrich L. Bauer. Das Verfahren der Treppeniteration und verwandte Verfahren zur Lösung algebraischer Eigenwertprobleme. Zeitschrift für angewandte Mathematik und Physik ZAMP, 8(3):214–235, 1957.
[32] J. Kuczyński and H. Woźniakowski. Estimating the largest eigenvalue by the power and Lanczos algorithms with a random start. SIAM Journal on Matrix Analysis and Applications, 13(4):1094–1122, 1992.
[33] Kin Cheong Sou and Anders Rantzer. On the minimum rank of a generalized matrix approximation problem in the maximum singular value norm. In Proceedings of the 19th International Symposium on Mathematical Theory of Networks and Systems (MTNS), 2010.
[34] Per-Gunnar Martinsson, Arthur Szlam, and Mark Tygert. Normalized power iterations for the computation of SVD, 2010. NIPS Workshop on Low-rank Methods for Large-scale Machine Learning.
[35] Jure Leskovec, Jon Kleinberg, and Christos Faloutsos.
Graphs over time: Densification laws, shrinking diameters and possible explanations. In Proceedings of the 11th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD), pages 177–187, 2005.
[36] Jason Rennie. 20 newsgroups. http://qwone.com/~jason/20Newsgroups/, May 2015.
Optimal Testing for Properties of Distributions

Jayadev Acharya, Constantinos Daskalakis, Gautam Kamath
EECS, MIT
{jayadev, costis, g}@mit.edu

Abstract

Given samples from an unknown discrete distribution p, is it possible to distinguish whether p belongs to some class of distributions C versus p being far from every distribution in C? This fundamental question has received tremendous attention in statistics, focusing primarily on asymptotic analysis, as well as in information theory and theoretical computer science, where the emphasis has been on small sample size and computational complexity. Nevertheless, even for basic properties of discrete distributions such as monotonicity, independence, log-concavity, unimodality, and monotone hazard rate, the optimal sample complexity is unknown. We provide a general approach via which we obtain sample-optimal and computationally efficient testers for all these distribution families. At the core of our approach is an algorithm which solves the following problem: Given samples from an unknown distribution p, and a known distribution q, are p and q close in χ²-distance, or far in total variation distance? The optimality of our testers is established by providing matching lower bounds, up to constant factors. Finally, a necessary building block for our testers and an important byproduct of our work are the first known computationally efficient proper learners for discrete log-concave and monotone hazard rate distributions.

1 Introduction

The quintessential scientific question is whether an unknown object has some property, i.e. whether a model from a specific class fits the object's observed behavior. If the unknown object is a probability distribution, p, to which we have sample access, we are typically asked to distinguish whether p belongs to some class C or whether it is sufficiently far from it.
This question has received tremendous attention in the field of statistics (see, e.g., [1, 2]), where test statistics for important properties such as the ones we consider here have been proposed. Nevertheless, the emphasis has been on asymptotic analysis, characterizing the rates of convergence of test statistics under null hypotheses, as the number of samples tends to infinity. In contrast, we wish to study the following problem in the small sample regime:

Π(C, ε): Given a family of distributions C, some ε > 0, and sample access to an unknown distribution p over a discrete support, how many samples are required to distinguish between p ∈ C versus d_TV(p, C) > ε?¹

The problem has been studied intensely in the literature on property testing and sublinear algorithms [3, 4, 5], where the emphasis has been on characterizing the optimal tradeoff between p's support size and the accuracy ε in the number of samples. Several results have been obtained, roughly clustering into three groups, where (i) C is the class of monotone distributions over [n], or more generally a poset [6, 7]; (ii) C is the class of independent, or k-wise independent, distributions over a hypergrid [8, 9]; and (iii) C contains a single distribution q, and the problem becomes that of testing whether p equals q or is far from it [8, 10, 11, 13]. With respect to (iii), [13] exactly characterizes the number of samples required to test identity to each distribution q, providing a single tester matching this bound simultaneously for all q. Nevertheless, this tester and its precursors are not applicable to the composite identity testing problem that we consider. If our class C were finite, we could test against each element in the class, albeit this would not necessarily be sample optimal.

¹We want success probability at least 2/3, which can be boosted to 1 − δ by repeating the test O(log(1/δ)) times and taking the majority.
If our class C were a continuum, we would need tolerant identity testers, which tend to be more expensive in terms of sample complexity [12], and result in substantially suboptimal testers for the classes we consider. Or we could use approaches related to the generalized likelihood ratio test, but their behavior is not well-understood in our regime, and optimizing likelihood over our classes becomes computationally intense.

Our Contributions

We obtain sample-optimal and computationally efficient testers for Π(C, ε) for the most fundamental shape restrictions to a distribution. Our contributions are the following:

1. For a known distribution q over [n], and sample access to p, we show that distinguishing the cases: (a) whether the χ²-distance between p and q is at most ε²/2, versus (b) the ℓ₁ distance between p and q is at least 2ε, requires Θ(√n/ε²) samples. As a corollary, we obtain an alternate argument that shows that identity testing requires Θ(√n/ε²) samples (previously shown in [13]).

2. For the class C = M^d_n of monotone distributions over [n]^d we require an optimal Θ(n^{d/2}/ε²) number of samples, where prior work requires Ω(√n log n/ε⁶) samples for d = 1 and Ω̃(n^{d−1/2} poly(1/ε)) for d > 1 [6, 7]. Our results improve the exponent of n with respect to d, shave all logarithmic factors in n, and improve the exponent of ε by at least a factor of 2.
(a) A useful building block and interesting byproduct of our analysis is extending Birgé's oblivious decomposition for single-dimensional monotone distributions [14] to monotone distributions in d ≥ 1, and to the stronger notion of χ²-distance. See Section C.1.
(b) Moreover, we show that O(log^d n) samples suffice to learn a monotone distribution over [n]^d in χ²-distance. See Lemma 3 for the precise statement.

3. For the class C = Π^d of product distributions over [n₁] × · · · × [n_d], our algorithm requires O(((∏_ℓ n_ℓ)^{1/2} + ∑_ℓ n_ℓ)/ε²) samples.
We note that a product distribution is one where all marginals are independent, so this is equivalent to testing if a collection of random variables are all independent. In the case where the n_ℓ's are large, the first term dominates, and the sample complexity is O((∏_ℓ n_ℓ)^{1/2}/ε²). In particular, when d is a constant and all n_ℓ's are equal to n, we achieve the optimal sample complexity of Θ(n^{d/2}/ε²). To the best of our knowledge, this is the first result for d ≥ 3, and when d = 2, this improves the previously known complexity from O((n/ε⁶) polylog(n/ε)) [8, 15], significantly improving the dependence on ε and shaving all logarithmic factors.

4. For the classes C = LCD_n, C = MHR_n and C = U_n of log-concave, monotone-hazard-rate and unimodal distributions over [n], we require an optimal Θ(√n/ε²) number of samples. Our testers for LCD_n and MHR_n are to our knowledge the first for these classes in the low sample regime we are studying—see [16] and its references for statistics literature on the asymptotic regime. Our tester for U_n improves the dependence of the sample complexity on ε by at least a factor of 2 in the exponent, and shaves all logarithmic factors in n, compared to testers based on testing monotonicity.
(a) A useful building block and important byproduct of our analysis are the first computationally efficient algorithms for properly learning log-concave and monotone-hazard-rate distributions, to within ε in total variation distance, from poly(1/ε) samples, independent of the domain size n. See Corollaries 4 and 6. Again, these are the first computationally efficient algorithms to our knowledge in the low sample regime. [17] provide algorithms for density estimation which are non-proper, i.e. they approximate an unknown distribution from these classes with a distribution that does not belong to these classes. On the other hand, the statistics literature focuses on maximum-likelihood estimation in the asymptotic regime—see e.g. [18] and its references.
5. For all the above classes we obtain matching lower bounds, showing that the sample complexity of our testers is optimal with respect to n, ε and, when applicable, d. See Section 8. Our lower bounds are based on extending Paninski's lower bound for testing uniformity [10].

Our Techniques

At the heart of our tester lies a novel use of the χ² statistic. Naturally, the χ² and its related ℓ₂ statistic have been used in several of the afore-cited results. We propose a new use of the χ² statistic enabling our optimal sample complexity. The essence of our approach is to first draw a small number of samples (independent of n for log-concave and monotone-hazard-rate distributions and only logarithmic in n for monotone and unimodal distributions) to approximate the unknown distribution p in χ² distance. If p ∈ C, our learner is required to output a distribution q that is O(ε)-close to C in total variation and O(ε²)-close to p in χ² distance. Then some analysis reduces our testing problem to distinguishing the following cases:

• p and q are O(ε²)-close in χ² distance; this case corresponds to p ∈ C.
• p and q are Ω(ε)-far in total variation distance; this case corresponds to d_TV(p, C) > ε.

We draw a comparison with robust identity testing, in which one must distinguish whether p and q are c₁ε-close or c₂ε-far in total variation distance, for constants c₂ > c₁ > 0. In [12], Valiant and Valiant show that Ω(n/log n) samples are required for this problem – a nearly-linear sample complexity, which may be prohibitively large in many settings. In comparison, the problem we study tests for χ² closeness rather than total variation closeness: a relaxation of the previous problem. However, our tester demonstrates that this relaxation allows us to achieve a substantially sublinear complexity of O(√n/ε²). On the other hand, this relaxation is still tight enough to be useful, as demonstrated by our application in obtaining sample-optimal testers.
We note that while the χ² statistic for hypothesis testing is prevalent in statistics, providing optimal error exponents in the large-sample regime, to the best of our knowledge modified versions of the χ² statistic have only recently been used in the small-sample regime: for closeness testing in [19, 20, 21] and for testing uniformity of monotone distributions in [22]. In particular, [19] design an unbiased statistic for estimating the χ² distance between two unknown distributions.

Organization

In Section 4, we show that a version of the χ² statistic, appropriately excluding certain elements of the support, is sufficiently well-concentrated to distinguish between the above cases. Moreover, the sample complexity of our algorithm is optimal for most classes. Our base tester is combined with the afore-mentioned extension of Birgé's decomposition theorem to test monotone distributions in Section 5 (see Theorem 2 and Corollary 1), and is also used to test independence of distributions in Section 6 (see Theorem 3). In Section 7, we give our results on testing unimodal, log-concave and monotone hazard rate distributions. Naturally, there are several bells and whistles that we need to add to the above skeleton to accommodate all classes of distributions that we are considering. In Remark 1 we mention the additional modifications for these classes.

Related Work. For the problems that we study in this paper, we have provided the related work in the previous section along with our contributions. We cannot do justice to the role of shape restrictions of probability distributions in probabilistic modeling and testing. It suffices to say that the classes of distributions that we study are fundamental, motivating extensive literature on their learning and testing [23]. In recent times, there has been work on shape-restricted statistics, pioneered by Jon Wellner and others.
[24, 25] study estimation of monotone and k-monotone densities, and [26, 27] study estimation of log-concave distributions. Due to the sheer volume of literature in statistics in this field, we will restrict ourselves to those already referenced. As we have mentioned, statistics has focused on the asymptotic regime as the number of samples tends to infinity. Instead, we are considering the low sample regime and are more stringent about the behavior of our testers, requiring two-sided guarantees: we want to accept if the unknown distribution is in our class of interest, and also reject if it is far from the class. For this problem, as discussed above, there are few results when C is a whole class of distributions. Closer related to our paper is the line of papers [6, 7, 28] for monotonicity testing, albeit these papers have sub-optimal sample complexity as discussed above. Testing independence of random variables has a long history in statistics [29, 30]. The theoretical computer science community has also considered the problem of testing independence of two random variables [8, 15]. While our results sharpen the case where the variables are over domains of equal size, they demonstrate an interesting asymmetric upper bound when this is not the case. More recently, Acharya and Daskalakis provide optimal testers for the family of Poisson Binomial Distributions [31]. Finally, contemporaneous work of Canonne et al. [32] provides a generic algorithm and lower bounds for the single-dimensional families of distributions considered here. We note that their algorithm has a sample complexity which is suboptimal in both n and ε, while our algorithms are optimal. Their algorithm also extends to mixtures of these classes, though some of these extensions are not computationally efficient. They also provide a framework for proving lower bounds, giving the optimal bounds for many classes when ε is sufficiently large with respect to 1/n.
In comparison, we provide these lower bounds unconditionally by modifying Paninski's construction [10] to suit the classes we consider.

2 Preliminaries

We use the following probability distances in our paper. The total variation distance between distributions p and q is d_TV(p, q) := sup_A |p(A) − q(A)| = ½‖p − q‖₁. The χ²-distance between p and q over [n] is defined as χ²(p, q) := ∑_{i∈[n]} (p_i − q_i)²/q_i. The Kolmogorov distance between two probability measures p and q over an ordered set (e.g., R) with cumulative density functions F_p and F_q is d_K(p, q) := sup_{x∈R} |F_p(x) − F_q(x)|.

Our paper is primarily concerned with testing against classes of distributions, defined formally as:

Definition 1. Given ε ∈ (0, 1] and sample access to a distribution p, an algorithm is said to test a class C if it has the following guarantees:
• If p ∈ C, the algorithm outputs ACCEPT with probability at least 2/3;
• If d_TV(p, C) ≥ ε, the algorithm outputs REJECT with probability at least 2/3.

We note the following useful relationships between these distances [33]:

Proposition 1. d_K(p, q)² ≤ d_TV(p, q)² ≤ ¼ χ²(p, q).

Definition 2. An η-effective support of a distribution p is any set S such that p(S) ≥ 1 − η. The flattening of a function f over a subset S is the function f̄ such that f̄_i = f(S)/|S|.

Definition 3. Let p be a distribution, and suppose I₁, . . . is a partition of the domain. The flattening of p with respect to I₁, . . . is the distribution p̄ which is the flattening of p over the intervals I₁, . . ..

Poisson Sampling. Throughout this paper, we use the standard Poissonization approach. Instead of drawing exactly m samples from a distribution p, we first draw m′ ∼ Poisson(m), and then draw m′ samples from p. As a result, the numbers of times different elements in the support of p occur in the sample become independent, giving much simpler analyses. In particular, the number of times we will observe domain element i will be distributed as Poisson(mp_i), independently for each i.
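The Poissonization trick is easy to exercise directly: rather than drawing exactly m samples and tallying, one can draw each count independently (a minimal sketch, with names of our choosing):

```python
import numpy as np

def poissonized_counts(p, m, rng):
    """Draw Poissonized sample counts: N_i ~ Poisson(m * p_i),
    independently across i. The total sample size is then Poisson(m),
    matching drawing m' ~ Poisson(m) samples from p and counting."""
    return rng.poisson(m * np.asarray(p))

rng = np.random.default_rng(3)
p = np.array([0.5, 0.3, 0.2])
m = 10_000
N = poissonized_counts(p, m, rng)
print(N, N.sum())
```

The empirical frequencies N_i/m concentrate around p_i, and the total N.sum() concentrates around m, so the sub-constant cost mentioned above is visible directly.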
Since Poisson(m) is tightly concentrated around m, this additional flexibility comes only at a sub-constant cost in the sample complexity, with an inversely exponential in m additive increase in the error probability.

3 The Testing Algorithm – An Overview

Our algorithm for testing a class C can be decomposed into three steps.

Near-proper learning in χ²-distance. Our first step requires learning with very specific guarantees. Given sample access to p ∈ C, we wish to output q such that (i) q is close to C in total variation distance, and (ii) p and q are O(ε²)-close in χ²-distance on an ε-effective support² of p.

²We also require the algorithm to output a description of an effective support for which this property holds. This requirement can be slightly relaxed, as we show in our results for testing unimodality.

When
We can distinguish between these two cases using O(pn/"2) samples with a simple statistical χ2test, that we describe in Section 4. Using the above three-step approach, our tester, as described in the next section, can directly test monotonicity, log-concavity, and monotone hazard rate. With an extra trick, using Kolmogorov’s max inequality, it can also test unimodality. 4 A Robust χ2-`1 Identity Test Our main result in this section is Theorem 1. Theorem 1. Given " 2 (0, 1], a class of probability distributions C, sample access to a distribution p, and an explicit description of a distribution q, both over [n] with the following properties: Property 1. dTV(q, C) " 2. Property 2. If p 2 C, then χ2(p, q) "2 500. Then there exists an algorithm such that: If p 2 C, it outputs ACCEPT with probability at least 2/3; If dTV(p, C) ≥", it outputs REJECT with probability at least 2/3. The time and sample complexity of this algorithm are O !pn/"2" . Proof. Algorithm 1 describes a χ2 testing procedure that gives the guarantee of the theorem. Algorithm 1 Chi-squared testing algorithm 1: Input: "; an explicit distribution q; (Poisson) m samples from a distribution p, where Ni denotes the number of occurrences of the ith domain element. 2: A {i : qi ≥"2/50n} 3: Z P i2A (Ni−mqi)2−Ni mqi 4: if Z m"2/10 return close 5: else return far In Section A we compute the mean and variance of the statistic Z (defined in Algorithm 1) as: E [Z] = m · X i2A (pi −qi)2 qi = m · χ2(pA, qA), Var [Z] = X i2A 2p2 i q2 i + 4m · pi · (pi −qi)2 q2 i ' (1) where by pA and qA we denote respectively the vectors p and q restricted to the coordinates in A, and we slightly abuse notation when we write χ2(pA, qA), as these do not then correspond to probability distributions. Lemma 1 demonstrates the separation in the means of the statistic Z in the two cases of interest, i.e., p 2 C versus dTV(p, C) ≥", and Lemma 2 shows the separation in the variances in the two cases. These two results are proved in Section B. 
Lemma 1. If p ∈ C, then E[Z] ≤ m·ε²/500. If dTV(p, C) ≥ ε, then E[Z] ≥ m·ε²/5.

Lemma 2. Let m ≥ 20000·√n/ε². If p ∈ C, then Var[Z] ≤ (1/500000)·m²ε⁴. If dTV(p, C) ≥ ε, then Var[Z] ≤ (1/100)·E[Z]².

Assuming Lemmas 1 and 2, Theorem 1 is now a simple application of Chebyshev's inequality. When p ∈ C, we have that

E[Z] + √3·Var[Z]^{1/2} ≤ (1/500 + √(3/500000))·mε² ≤ mε²/200.

Thus, Chebyshev's inequality gives

Pr[Z ≥ mε²/10] ≤ Pr[Z ≥ mε²/200] ≤ Pr[Z − E[Z] ≥ √3·Var[Z]^{1/2}] ≤ 1/3.

When dTV(p, C) ≥ ε,

E[Z] − √3·Var[Z]^{1/2} ≥ (1 − √(3/100))·E[Z] ≥ 3mε²/20.

Therefore,

Pr[Z ≤ mε²/10] ≤ Pr[Z ≤ 3mε²/20] ≤ Pr[Z − E[Z] ≤ −√3·Var[Z]^{1/2}] ≤ 1/3.

This proves the correctness of Algorithm 1. For the running time, we divide the summation in Z into the elements for which N_i > 0 and N_i = 0. When N_i = 0, the contribution of the term to the summation is m·q_i, and we can sum these terms up by subtracting the total probability of all elements appearing at least once from 1.

Remark 1. To apply Theorem 1, we need to learn a distribution in C and find a q that is O(ε²)-close in χ²-distance to p. For the class of monotone distributions, we are able to efficiently obtain such a q, which immediately implies sample-optimal algorithms for this class. However, for some classes we may not be able to learn a q with such strong guarantees, and we must consider modifications to our base testing algorithm. For example, for log-concave and monotone hazard rate distributions, we can obtain a distribution q and a set S with the following guarantees:
- If p ∈ C, then χ²(p_S, q_S) ≤ O(ε²) and p(S) ≥ 1 − O(ε);
- If dTV(p, C) ≥ ε, then dTV(p, q) ≥ ε/2.
In this scenario, the tester will simply pretend that the support of p and q is S, ignoring any samples and support elements in [n] \ S. The analysis of this tester is extremely similar to that of Theorem 1. In particular, we can still show that the statistic Z will be separated in the two cases. When p ∈ C, excluding [n] \ S will only reduce Z.
On the other hand, when dTV(p, C) ≥ ε, since p(S) ≥ 1 − O(ε), p and q must still be far on the remaining support, and we can show that Z is still sufficiently large. Therefore, a small modification allows us to handle this case with the same sample complexity of O(√n/ε²).

For unimodal distributions, we are even unable to identify a large enough subset of the support where the χ² approximation is guaranteed to be tight. But we can show that there exists a light enough piece of the support (in terms of probability mass under p) that we can exclude to make the χ² approximation tight. Given that we only use Chebyshev's inequality to prove the concentration of the test statistic, it would seem that our lack of knowledge of the piece to exclude would involve a union bound and a corresponding increase in the required number of samples. We avoid this through a careful application of Kolmogorov's max inequality in our setting. See Theorem 7 of Section 7.

5 Testing Monotonicity

As the first application of our testing framework, we demonstrate how to test for monotonicity. Let d ≥ 1, and i = (i_1, ..., i_d), j = (j_1, ..., j_d) ∈ [n]^d. We say i ≼ j if i_l ≤ j_l for l = 1, ..., d. A distribution p over [n]^d is monotone (decreasing) if for all i ≼ j, p_i ≥ p_j.³ We follow the steps in the overview. The learning result we show is as follows (proved in Section C).

³This definition describes monotone non-increasing distributions. By symmetry, identical results hold for monotone non-decreasing distributions.

Lemma 3. Let d ≥ 1. There is an algorithm that takes m = O((d log(n)/ε²)^d · 1/ε²) samples from a distribution p over [n]^d, and outputs a distribution q such that if p is monotone, then with probability at least 5/6, χ²(p, q) ≤ ε²/500. Furthermore, the distance of q to monotone distributions can be computed in time poly(m).

This accomplishes the first two steps in the overview.
In particular, if the distance of q from monotone distributions is more than ε/2, we declare that p is not monotone. Therefore, Property 1 in Theorem 1 is satisfied, and the lemma states that Property 2 holds with probability at least 5/6. We then proceed to the χ²-ℓ1 test. At this point, we have precisely the guarantees needed to apply Theorem 1 over [n]^d, directly implying our main result of this section:

Theorem 2. For any d ≥ 1, there exists an algorithm for testing monotonicity over [n]^d with sample complexity

O( n^{d/2}/ε² + (d log n/ε²)^d · 1/ε² )

and time complexity O( n^{d/2}/ε² + poly(log n, 1/ε)^d ).

In particular, this implies the following optimal algorithms for monotonicity testing for all d ≥ 1:

Corollary 1. Fix any d ≥ 1, and suppose ε > √(d log n)/n^{1/4}. Then there exists an algorithm for testing monotonicity over [n]^d with sample complexity O(n^{d/2}/ε²).

We note that the class of monotone distributions is the simplest of the classes we consider. We now consider testing for log-concavity, monotone hazard rate, and unimodality, all of which are much more challenging to test. In particular, these classes require a more sophisticated structural understanding, more complex proper χ²-learning algorithms, and non-trivial modifications to our χ²-tester. We have already given some details on the required adaptations to the tester in Remark 1. Our algorithms for learning these classes use convex programming. One of the main challenges is to enforce log-concavity of the PDF when learning LCD_n (respectively, of the CDF when learning MHR_n), while simultaneously enforcing closeness in total variation distance. This involves a careful choice of our variables, and we exploit structural properties of the classes to ensure the soundness of particular Taylor approximations. We encourage the reader to refer to the proofs of Theorems 7, 8, and 9 for more details.

6 Testing Independence of Random Variables

Let X := [n_1] × ...
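As a concrete reading of the coordinatewise partial order used in the monotonicity definition above, the following brute-force Python check (our own, purely illustrative, and exponential in d, unlike the tester itself) decides whether an explicitly given p over [n]^d is monotone non-increasing.

```python
import itertools

def is_monotone_decreasing(p, n, d):
    """Check whether p : [n]^d -> R, given as a dict keyed by d-tuples,
    is monotone non-increasing under the coordinatewise partial order.
    Brute force over all O(n^{2d}) comparable pairs; for illustration only."""
    grid = list(itertools.product(range(n), repeat=d))
    for i in grid:
        for j in grid:
            # i precedes j when i_l <= j_l in every coordinate l.
            if all(a <= b for a, b in zip(i, j)) and p[i] < p[j]:
                return False
    return True
```

For example, over [2]² the distribution that puts 0.4 on (0,0) and 0.2 elsewhere is monotone decreasing, while any distribution that grows along a coordinate is not.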
× [n_d], and let Π_d be the class of all product distributions over X. Similarly to learning monotone distributions in χ² distance, we prove the following result in Section E.

Lemma 4. There is an algorithm that takes O((Σ_{ℓ=1}^d n_ℓ)/ε²) samples from a distribution p and outputs a q ∈ Π_d such that if p ∈ Π_d, then with probability at least 5/6, χ²(p, q) ≤ O(ε²).

The distribution q always satisfies Property 1 since it is in Π_d, and by this lemma, with probability at least 5/6 it satisfies Property 2 in Theorem 1. Therefore, we obtain the following result.

Theorem 3. For any d ≥ 1, there exists an algorithm for testing independence of random variables over [n_1] × ... × [n_d] with sample and time complexity O(((∏_{ℓ=1}^d n_ℓ)^{1/2} + Σ_{ℓ=1}^d n_ℓ)/ε²).

When d = 2 and n_1 = n_2 = n, this improves the result of [8] for testing independence of two random variables.

Corollary 2. Testing if two distributions over [n] are independent has sample complexity Θ(n/ε²).

7 Testing Unimodality, Log-Concavity and Monotone Hazard Rate

Unimodal distributions over [n] (denoted by U_n) are all distributions p for which there exists an i* such that p_i is non-decreasing for i ≤ i* and non-increasing for i ≥ i*. Log-concave distributions over [n] (denoted by LCD_n) are the sub-class of unimodal distributions for which p_{i−1}·p_{i+1} ≤ p_i². Monotone hazard rate (MHR) distributions over [n] (denoted by MHR_n) are distributions p with CDF F for which i < j implies f_i/(1 − F_i) ≤ f_j/(1 − F_j). The following theorem bounds the complexity of testing these classes (for moderate ε).

Theorem 4. Suppose ε > n^{−1/5}. For each of the classes unimodal, log-concave, and MHR, there exists an algorithm for testing the class over [n] with sample complexity O(√n/ε²).

This result is a corollary of the specific results for each class, each of which is proved in the appendix. In particular, more complete statements for unimodality, log-concavity and monotone hazard rate, with precise dependence on both n and ε, are given in Theorems 7, 8 and 9 respectively.
We mention some key points about each class, and refer the reader to the respective appendix for further details.

Testing Unimodality. Using a union bound argument, one can use the results on testing monotonicity to give an algorithm with O(√n log n/ε²) samples. However, this is unsatisfactory, since our lower bound (and, as we will demonstrate, the true complexity of this problem) is √n/ε². We overcome the logarithmic barrier introduced by the union bound by employing a non-oblivious decomposition of the domain and using Kolmogorov's max inequality.

Testing Log-Concavity. The key step is to design an algorithm to learn a log-concave distribution in χ² distance. We formulate the problem as a linear program in the logarithms of the distribution and show that, using O(1/ε⁵) samples, it is possible to output a log-concave distribution that has a χ² distance of at most O(ε²) from the underlying log-concave distribution.

Testing Monotone Hazard Rate. For learning MHR distributions in χ² distance, we formulate a linear program in the logarithms of the CDF and show that, using O(log(n/ε)/ε⁵) samples, it is possible to output an MHR distribution that has a χ² distance of at most O(ε²) from the underlying MHR distribution.

8 Lower Bounds

We now prove sharp lower bounds for the classes of distributions we consider. We show that the example studied by Paninski [10] to prove lower bounds on testing uniformity can be used to prove lower bounds for the classes we consider. They consider a class Q consisting of 2^{n/2} distributions defined as follows. Without loss of generality, assume that n is even. For each of the 2^{n/2} vectors z_0 z_1 ... z_{n/2−1} ∈ {−1, 1}^{n/2}, define a distribution q ∈ Q over [n] as follows:

q_i = (1 + z_ℓ·c·ε)/n  for i = 2ℓ + 1,
q_i = (1 − z_ℓ·c·ε)/n  for i = 2ℓ.    (2)

Each distribution in Q has a total variation distance of cε/2 from U_n, the uniform distribution over [n].
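The perturbed-uniform construction in equation (2) is easy to instantiate. The sketch below is ours (function name `paninski_q` is our own label); it builds q from a sign vector z and verifies nothing beyond what the construction states.

```python
import numpy as np

def paninski_q(z, c, eps):
    """Build the distribution q of equation (2) from a sign vector
    z in {-1,+1}^{n/2}, with n = 2*len(z) and a given constant c."""
    z = np.asarray(z, dtype=float)
    n = 2 * len(z)
    q = np.empty(n)
    q[1::2] = (1 + z * c * eps) / n   # elements i = 2l + 1
    q[0::2] = (1 - z * c * eps) / n   # elements i = 2l
    return q
```

By construction q sums to one, and its total variation distance from the uniform distribution is (1/2)·n·(cε/n) = cε/2, matching the claim in the text.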
By choosing c to be an appropriate constant, Paninski [10] showed that a distribution picked uniformly at random from Q cannot be distinguished from U_n with fewer than √n/ε² samples with probability at least 2/3. Suppose C is a class of distributions such that (i) the uniform distribution U_n is in C, and (ii) for appropriately chosen c, dTV(C, Q) ≥ ε. Then testing C is not easier than distinguishing U_n from Q. Invoking [10] immediately implies that testing the class C requires Ω(√n/ε²) samples. The lower bounds for all the one-dimensional distributions follow directly from this construction, and for testing monotonicity in higher dimensions, we extend this construction to d ≥ 1 appropriately. These arguments are proved in Section H, leading to the following lower bounds for testing these classes:

Theorem 5.
• For any d ≥ 1, any algorithm for testing monotonicity over [n]^d requires Ω(n^{d/2}/ε²) samples.
• For d ≥ 1, testing independence over [n_1] × · · · × [n_d] requires Ω((n_1 n_2 ... n_d)^{1/2}/ε²) samples.
• Testing unimodality, log-concavity, or monotone hazard rate over [n] needs Ω(√n/ε²) samples.

References
[1] R. A. Fisher, Statistical Methods for Research Workers. Edinburgh: Oliver and Boyd, 1925.
[2] E. Lehmann and J. Romano, Testing Statistical Hypotheses. Springer Science & Business Media, 2006.
[3] E. Fischer, "The art of uninformed decisions: A primer to property testing," Science, 2001.
[4] R. Rubinfeld, "Sublinear-time algorithms," in International Congress of Mathematicians, 2006.
[5] C. L. Canonne, "A survey on distribution testing: your data is big, but is it blue," ECCC, 2015.
[6] T. Batu, R. Kumar, and R. Rubinfeld, "Sublinear algorithms for testing monotone and unimodal distributions," in Proceedings of STOC, 2004.
[7] A. Bhattacharyya, E. Fischer, R. Rubinfeld, and P. Valiant, "Testing monotonicity of distributions over general partial orders," in ICS, 2011, pp. 239–252.
[8] T. Batu, E. Fischer, L. Fortnow, R. Kumar, R. Rubinfeld, and P.
White, "Testing random variables for independence and identity," in Proceedings of FOCS, 2001.
[9] N. Alon, A. Andoni, T. Kaufman, K. Matulef, R. Rubinfeld, and N. Xie, "Testing k-wise and almost k-wise independence," in Proceedings of STOC, 2007.
[10] L. Paninski, "A coincidence-based test for uniformity given very sparsely sampled discrete data," IEEE Transactions on Information Theory, vol. 54, no. 10, 2008.
[11] D. Huang and S. Meyn, "Generalized error exponents for small sample universal hypothesis testing," IEEE Transactions on Information Theory, vol. 59, no. 12, pp. 8157–8181, Dec. 2013.
[12] G. Valiant and P. Valiant, "Estimating the unseen: An n/log n-sample estimator for entropy and support size, shown optimal via new CLTs," in Proceedings of STOC, 2011.
[13] G. Valiant and P. Valiant, "An automatic inequality prover and instance optimal identity testing," in FOCS, 2014.
[14] L. Birgé, "Estimating a density under order restrictions: Nonasymptotic minimax risk," The Annals of Statistics, vol. 15, no. 3, pp. 995–1012, September 1987.
[15] R. Levi, D. Ron, and R. Rubinfeld, "Testing properties of collections of distributions," Theory of Computing, vol. 9, no. 8, pp. 295–347, 2013.
[16] P. Hall and I. Van Keilegom, "Testing for monotone increasing hazard rate," Annals of Statistics, pp. 1109–1137, 2005.
[17] S. O. Chan, I. Diakonikolas, R. A. Servedio, and X. Sun, "Learning mixtures of structured distributions over discrete domains," in Proceedings of SODA, 2013.
[18] M. Cule and R. Samworth, "Theoretical properties of the log-concave maximum likelihood estimator of a multidimensional density," Electronic Journal of Statistics, vol. 4, pp. 254–270, 2010.
[19] J. Acharya, H. Das, A. Jafarpour, A. Orlitsky, S. Pan, and A. T. Suresh, "Competitive classification and closeness testing," in COLT, 2012, pp. 22.1–22.18.
[20] S. Chan, I. Diakonikolas, G. Valiant, and P. Valiant, "Optimal algorithms for testing closeness of discrete distributions," in SODA, 2014, pp. 1193–1203.
[21] B. B.
Bhattacharya and G. Valiant, “Testing closeness with unequal sized samples,” in NIPS, 2015. [22] J. Acharya, A. Jafarpour, A. Orlitsky, and A. Theertha Suresh, “A competitive test for uniformity of monotone distributions,” in Proceedings of AISTATS, 2013, pp. 57–65. [23] R. E. Barlow, D. J. Bartholomew, J. M. Bremner, and H. D. Brunk, Statistical Inference under Order Restrictions. New York: Wiley, 1972. [24] H. K. Jankowski and J. A. Wellner, “Estimation of a discrete monotone density,” Electronic Journal of Statistics, vol. 3, pp. 1567–1605, 2009. [25] F. Balabdaoui and J. A. Wellner, “Estimation of a k-monotone density: characterizations, consistency and minimax lower bounds,” Statistica Neerlandica, vol. 64, no. 1, pp. 45–70, 2010. [26] F. Balabdaoui, H. Jankowski, and K. Rufibach, “Maximum likelihood estimation and confidence bands for a discrete log-concave distribution,” 2011. [Online]. Available: http://arxiv.org/abs/1107.3904v1 [27] A. Saumard and J. A. Wellner, “Log-concavity and strong log-concavity: a review,” Statistics Surveys, vol. 8, pp. 45–114, 2014. [28] M. Adamaszek, A. Czumaj, and C. Sohler, “Testing monotone continuous distributions on highdimensional real cubes,” in SODA, 2010, pp. 56–65. [29] J. N. Rao and A. J. Scott, “The analysis of categorical data from complex sample surveys,” Journal of the American Statistical Association, vol. 76, no. 374, pp. 221–230, 1981. [30] A. Agresti and M. Kateri, Categorical data analysis. Springer, 2011. [31] J. Acharya and C. Daskalakis, “Testing Poisson Binomial Distributions,” in SODA, 2015, pp. 1829–1840. [32] C. Canonne, I. Diakonikolas, T. Gouleakis, and R. Rubinfeld, “Testing shape restrictions of discrete distributions,” in STACS, 2016. [33] A. L. Gibbs and F. E. Su, “On choosing and bounding probability metrics,” International Statistical Review, vol. 70, no. 3, pp. 419–435, dec 2002. [34] J. Acharya, A. Jafarpour, A. Orlitsky, and A. T. 
Suresh, "Efficient compression of monotone and m-modal distributions," in ISIT, 2014.
[35] S. Kamath, A. Orlitsky, D. Pichapati, and A. T. Suresh, "On learning distributions from their samples," in COLT, 2015.
[36] P. Massart, "The tight constant in the Dvoretzky-Kiefer-Wolfowitz inequality," The Annals of Probability, vol. 18, no. 3, pp. 1269–1283, July 1990.
Combinatorial Cascading Bandits

Branislav Kveton, Adobe Research, San Jose, CA (kveton@adobe.com)
Zheng Wen, Yahoo Labs, Sunnyvale, CA (zhengwen@yahoo-inc.com)
Azin Ashkan, Technicolor Research, Los Altos, CA (azin.ashkan@technicolor.com)
Csaba Szepesvári, Department of Computing Science, University of Alberta (szepesva@cs.ualberta.ca)

Abstract

We propose combinatorial cascading bandits, a class of partial monitoring problems where at each step a learning agent chooses a tuple of ground items subject to constraints and receives a reward if and only if the weights of all chosen items are one. The weights of the items are binary, stochastic, and drawn independently of each other. The agent observes the index of the first chosen item whose weight is zero. This observation model arises in network routing, for instance, where the learning agent may only observe the first link in the routing path which is down, and blocks the path. We propose a UCB-like algorithm for solving our problems, CombCascade, and prove gap-dependent and gap-free upper bounds on its n-step regret. Our proofs build on recent work in stochastic combinatorial semi-bandits but also address two novel challenges of our setting, a non-linear reward function and partial observability. We evaluate CombCascade on two real-world problems and show that it performs well even when our modeling assumptions are violated. We also demonstrate that our setting requires a new learning algorithm.

1 Introduction

Combinatorial optimization [16] has many real-world applications. In this work, we study a class of combinatorial optimization problems with a binary objective function that returns one if and only if the weights of all chosen items are one. The weights of the items are binary, stochastic, and drawn independently of each other. Many popular optimization problems can be formulated in our setting.
Network routing is the problem of choosing a routing path in a computer network that maximizes the probability that all links in the chosen path are up. Recommendation is the problem of choosing a list of items that minimizes the probability that none of the recommended items are attractive. Both of these problems are closely related and can be solved using similar techniques (Section 2.3).

Combinatorial cascading bandits are a novel framework for online learning of the aforementioned problems when the distribution over the weights of items is unknown. Our goal is to maximize the expected cumulative reward of a learning agent in n steps. Our learning problem is challenging for two main reasons. First, the reward function is non-linear in the weights of chosen items. Second, we only observe the index of the first chosen item with a zero weight. This kind of feedback arises frequently in network routing, for instance, where the learning agent may only observe the first link in the routing path which is down, and blocks the path. This feedback model was recently proposed in the so-called cascading bandits [10]. The main difference in our work is that the feasible set can be arbitrary; the feasible set in cascading bandits is a uniform matroid.

Stochastic online learning with combinatorial actions has been previously studied with semi-bandit feedback and a linear reward function [8, 11, 12], and its monotone transformation [5]. Established algorithms for multi-armed bandits, such as UCB1 [3], KL-UCB [9], and Thompson sampling [18, 2], can usually be easily adapted to stochastic combinatorial semi-bandits. However, it is non-trivial to show that the algorithms are statistically efficient, in the sense that their regret matches some lower bound. Kveton et al. [12] recently showed this for CombUCB1, a form of UCB1. Our analysis builds on this recent advance but also addresses two novel challenges of our problem, a non-linear reward function and partial observability.
These challenges cannot be addressed straightforwardly based on Kveton et al. [12, 10]. We make multiple contributions. In Section 2, we define the online learning problem of combinatorial cascading bandits and propose CombCascade, a variant of UCB1, for solving it. CombCascade is computationally efficient on any feasible set where a linear function can be optimized efficiently. A minor-looking improvement to the UCB1 upper confidence bound, which exploits the fact that the expected weights of items are bounded by one, is necessary in our analysis. In Section 3, we derive gap-dependent and gap-free upper bounds on the regret of CombCascade, and discuss the tightness of these bounds. In Section 4, we evaluate CombCascade on two practical problems and show that the algorithm performs well even when our modeling assumptions are violated. We also show that CombUCB1 [8, 12] cannot solve some instances of our problem, which highlights the need for a new learning algorithm.

2 Combinatorial Cascading Bandits

This section introduces our learning problem, its applications, and also our proposed algorithm. We discuss the computational complexity of the algorithm and then introduce the so-called disjunctive variant of our problem. We denote random variables by boldface letters. The cardinality of set A is |A| and we assume that min ∅ = +∞. The binary and operation is denoted by ∧, and the binary or by ∨.

2.1 Setting

We model our online learning problem as a combinatorial cascading bandit. A combinatorial cascading bandit is a tuple B = (E, P, Θ), where E = {1, ..., L} is a finite set of L ground items, P is a probability distribution over a binary hypercube {0, 1}^E, Θ ⊆ Π*(E), and:

Π*(E) = {(a_1, ..., a_k) : k ≥ 1, a_1, ..., a_k ∈ E, a_i ≠ a_j for any i ≠ j}

is the set of all tuples of distinct items from E. We refer to Θ as the feasible set and to A ∈ Θ as a feasible solution. We abuse our notation and also treat A as the set of items in solution A.
Without loss of generality, we assume that the feasible set Θ covers the ground set, E = ∪Θ. Let (w_t)_{t=1}^n be an i.i.d. sequence of n weights drawn from distribution P, where w_t ∈ {0, 1}^E. At time t, the learning agent chooses solution A_t = (a^t_1, ..., a^t_{|A_t|}) ∈ Θ based on its past observations and then receives a binary reward:

r_t = min_{e ∈ A_t} w_t(e) = ∧_{e ∈ A_t} w_t(e)

as a response to this choice. The reward is one if and only if the weights of all items in A_t are one. The key step in our solution and its analysis is that the reward can be expressed as r_t = f(A_t, w_t), where f : Θ × [0, 1]^E → [0, 1] is a reward function, which is defined as:

f(A, w) = ∏_{e ∈ A} w(e),  A ∈ Θ, w ∈ [0, 1]^E.

At the end of time t, the agent observes the index of the first item in A_t whose weight is zero, and +∞ if such an item does not exist. We denote this feedback by O_t and define it as:

O_t = min {1 ≤ k ≤ |A_t| : w_t(a^t_k) = 0}.

Note that O_t fully determines the weights of the first min{O_t, |A_t|} items in A_t. In particular:

w_t(a^t_k) = 1{k < O_t},  k = 1, ..., min{O_t, |A_t|}.    (1)

Accordingly, we say that item e is observed at time t if e = a^t_k for some 1 ≤ k ≤ min{O_t, |A_t|}. Note that the order of items in A_t affects the feedback O_t but not the reward r_t. This differentiates our problem from combinatorial semi-bandits. The goal of our learning agent is to maximize its expected cumulative reward. This is equivalent to minimizing the expected cumulative regret in n steps:

R(n) = E[Σ_{t=1}^n R(A_t, w_t)],

where R(A_t, w_t) = f(A*, w_t) − f(A_t, w_t) is the instantaneous stochastic regret of the agent at time t and A* = arg max_{A ∈ Θ} E[f(A, w)] is the optimal solution in hindsight of knowing P. For simplicity of exposition, we assume that A*, as a set, is unique. A major simplifying assumption, which simplifies our optimization problem and its learning, is that the distribution P is factored:

P(w) = ∏_{e ∈ E} P_e(w(e)),    (2)

where P_e is a Bernoulli distribution with mean w̄(e).
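The reward and the cascade feedback above can be sketched directly in code. The following Python functions are our own illustrative rendering of r_t, O_t, and equation (1), not part of the paper.

```python
def reward_and_feedback(A, w):
    """Conjunctive reward r_t and cascade feedback O_t for a chosen tuple A,
    given a weight assignment w (dict: item -> {0,1}). Sketch only."""
    for k, e in enumerate(A, start=1):
        if w[e] == 0:
            return 0, k          # the first down item blocks the solution
    return 1, float('inf')       # all weights one: reward 1, O_t = +inf

def observed_weights(A, O):
    """Equation (1): the feedback O fully determines the weights of the
    first min(O, |A|) items of A, namely w(a_k) = 1{k < O}."""
    return [1 if k < O else 0 for k in range(1, min(O, len(A)) + 1)]
```

Note that reordering A can change O_t (which item is revealed first) without changing r_t, which is the point made in the text.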
We borrow this assumption from the work of Kveton et al. [10] and it is critical to our results; we would face computational difficulties without it. Under this assumption, the expected reward of solution A ∈ Θ, the probability that the weight of each item in A is one, can be written as E[f(A, w)] = f(A, w̄), and depends only on the expected weights of the individual items in A. It follows that:

A* = arg max_{A ∈ Θ} f(A, w̄).

In Section 4, we experiment with two problems that violate our independence assumption. We also discuss the implications of this violation.

Several interesting online learning problems can be formulated as combinatorial cascading bandits. Consider the problem of learning routing paths in the Simple Mail Transfer Protocol (SMTP) that maximize the probability of e-mail delivery. The ground set in this problem consists of all links in the network and the feasible set of all routing paths. At time t, the learning agent chooses routing path A_t and observes whether the e-mail is delivered. If the e-mail is not delivered, the agent observes the first link in the routing path which is down. This kind of information is available in SMTP. The weight of item e at time t is an indicator of link e being up at time t. The independence assumption in (2) requires that all links fail independently. This assumption is common in existing network routing models [6]. We return to the problem of network routing in Section 4.2.

2.2 CombCascade Algorithm

Our proposed algorithm, CombCascade, is described in Algorithm 1. The algorithm belongs to the family of UCB algorithms. At time t, CombCascade operates in three stages. First, it computes the upper confidence bounds (UCBs) U_t ∈ [0, 1]^E on the expected weights of all items in E.
The UCB of item e at time t is defined as:

U_t(e) = min { ŵ_{T_{t−1}(e)}(e) + c_{t−1,T_{t−1}(e)}, 1 },    (3)

where ŵ_s(e) is the average of s observed weights of item e, T_t(e) is the number of times that item e is observed in t steps, and c_{t,s} = √(1.5 log t / s) is the radius of a confidence interval around ŵ_s(e) after t steps such that w̄(e) ∈ [ŵ_s(e) − c_{t,s}, ŵ_s(e) + c_{t,s}] holds with high probability. After the UCBs are computed, CombCascade chooses the optimal solution with respect to these UCBs:

A_t = arg max_{A ∈ Θ} f(A, U_t).

Finally, CombCascade observes O_t and updates its estimates of the expected weights based on the weights of the observed items in (1), for all items a^t_k such that k ≤ min{O_t, |A_t|}.

For simplicity of exposition, we assume that CombCascade is initialized with one sample w_0 ∼ P. If w_0 is unavailable, we can formulate the problem of obtaining w_0 as an optimization problem on Θ with a linear objective [12]. The initialization procedure of Kveton et al. [12] tracks observed items and adaptively chooses solutions with the maximum number of unobserved items. This approach is computationally efficient on any feasible set Θ where a linear function can be optimized efficiently.

Algorithm 1 CombCascade for combinatorial cascading bandits.
  // Initialization
  Observe w_0 ∼ P
  ∀e ∈ E: T_0(e) ← 1
  ∀e ∈ E: ŵ_1(e) ← w_0(e)
  for all t = 1, ..., n do
    // Compute UCBs
    ∀e ∈ E: U_t(e) ← min{ ŵ_{T_{t−1}(e)}(e) + c_{t−1,T_{t−1}(e)}, 1 }
    // Solve the optimization problem and get feedback
    A_t ← arg max_{A ∈ Θ} f(A, U_t)
    Observe O_t ∈ {1, ..., |A_t|, +∞}
    // Update statistics
    ∀e ∈ E: T_t(e) ← T_{t−1}(e)
    for all k = 1, ..., min{O_t, |A_t|} do
      e ← a^t_k
      T_t(e) ← T_t(e) + 1
      ŵ_{T_t(e)}(e) ← [T_{t−1}(e)·ŵ_{T_{t−1}(e)}(e) + 1{k < O_t}] / T_t(e)

CombCascade has two attractive properties. First, the algorithm is computationally efficient, in the sense that A_t = arg max_{A ∈ Θ} Σ_{e ∈ A} log(U_t(e)) is the problem of maximizing a linear function on
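One round of CombCascade over a small explicit feasible set can be sketched as follows. This is our own illustrative Python, not the authors' implementation; in particular, the exhaustive `max` over Θ stands in for an efficient linear-optimization oracle over log U_t.

```python
import math

def ucb(w_hat, T, t):
    """Upper confidence bounds of equation (3), clipped at 1."""
    return {e: min(w_hat[e] + math.sqrt(1.5 * math.log(t) / T[e]), 1.0)
            for e in w_hat}

def choose(Theta, U):
    """Argmax of f(A, U) = prod_{e in A} U(e) over the feasible set.
    Exhaustive search here; equivalently, maximize sum of log U(e)."""
    return max(Theta, key=lambda A: math.prod(U[e] for e in A))

def update(A, O, w_hat, T):
    """Update the running means from the observed prefix, equation (1)."""
    for k, e in enumerate(A[:min(O, len(A))], start=1):
        w = 1 if k < O else 0      # observed weight of the k-th item
        T[e] += 1
        w_hat[e] += (w - w_hat[e]) / T[e]
```

The clipping at 1 in `ucb` is the minor-looking but, per the text, analytically necessary improvement over the plain UCB1 bound.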
Second, CombCascade is sample efficient because the UCB of solution A, f(A, Ut), is a product of the UCBs of all items in A, which are estimated separately. The regret of CombCascade does not depend on |⇥| and is polynomial in all other quantities of interest. 2.3 Disjunctive Objective Our reward model is conjuctive, the reward is one if and only if the weights of all chosen items are one. A natural alternative is a disjunctive model rt = maxe2At wt(e) = W e2At wt(e), the reward is one if the weight of any item in At is one. This model arises in recommender systems, where the recommender is rewarded when the user is satisfied with any recommended item. The feedback Ot is the index of the first item in At whose weight is one, as in cascading bandits [10]. Let f_ : ⇥⇥[0, 1]E ! [0, 1] be a reward function, which is defined as f_(A, w) = 1 −Q e2A(1 − w(e)). Then under the independence assumption in (2), E [f_(A, w)] = f_(A, ¯w) and: A⇤= arg max A2⇥ f_(A, ¯w) = arg min A2⇥ Y e2A (1 −¯w(e)) = arg min A2⇥ f(A, 1 −¯w) . Therefore, A⇤can be learned by a variant of CombCascade where the observations are 1 −wt and each UCB Ut(e) is substituted with a lower confidence bound (LCB) on 1 −¯w(e): Lt(e) = max # 1 −ˆwTt−1(e)(e) −ct−1,Tt−1(e), 0 . Let R(At, wt) = f(At, 1 −wt) −f(A⇤, 1 −wt) be the instantaneous stochastic regret at time t. Then we can bound the regret of CombCascade as in Theorems 1 and 2. The only difference is that ∆e,min and f ⇤are redefined as: ∆e,min = minA2⇥:e2A,∆A>0 f(A, 1 −¯w) −f(A⇤, 1 −¯w) , f ⇤= f(A⇤, 1 −¯w) . 3 Analysis We prove gap-dependent and gap-free upper bounds on the regret of CombCascade in Section 3.1. We discuss these bounds in Section 3.2. 3.1 Upper Bounds We define the suboptimality gap of solution A = (a1, . . . , a|A|) as ∆A = f(A⇤, ¯w) −f(A, ¯w) and the probability that all items in A are observed as pA = Q|A|−1 k=1 ¯w(ak). For convenience, we define 4 shorthands f ⇤= f(A⇤, ¯w) and p⇤= pA⇤. 
Let ˜E = E \ A⇤be the set of suboptimal items, the items that are not in A⇤. Then the minimum gap associated with suboptimal item e 2 ˜E is: ∆e,min = f(A⇤, ¯w) −maxA2⇥:e2A,∆A>0 f(A, ¯w) . Let K = max {|A| : A 2 ⇥} be the maximum number of items in any solution and f ⇤> 0. Then the regret of CombCascade is bounded as follows. Theorem 1. The regret of CombCascade is bounded as R(n) K f ⇤ X e2 ˜ E 4272 ∆e,min log n + ⇡2 3 L. Proof. The proof is in Appendix A. The main idea is to reduce our analysis to that of CombUCB1 in stochastic combinatorial semi-bandits [12]. This reduction is challenging for two reasons. First, our reward function is non-linear in the weights of chosen items. Second, we only observe some of the chosen items. Our analysis can be trivially reduced to semi-bandits by conditioning on the event of observing all items. In particular, let Ht = (A1, O1, . . . , At−1, Ot−1, At) be the history of CombCascade up to choosing solution At, the first t −1 observations and t actions. Then we can express the expected regret at time t conditioned on Ht as: E [R(At, wt) | Ht] = E [∆At(1/pAt)1{∆At > 0, Ot ≥|At|} | Ht] and analyze our problem under the assumption that all items in At are observed. This reduction is problematic because the probability pAt can be low, and as a result we get a loose regret bound. We address this issue by formalizing the following insight into our problem. When f(A, ¯w) ⌧f ⇤, CombCascade can distinguish A from A⇤without learning the expected weights of all items in A. In particular, CombCascade acts implicitly on the prefixes of suboptimal solutions, and we choose them in our analysis such that the probability of observing all items in the prefixes is “close” to f ⇤, and the gaps are “close” to those of the original solutions. Lemma 1. Let A = (a1, . . . , a|A|) 2 ⇥be a feasible solution and Bk = (a1, . . . , ak) be a prefix of k |A| items of A. Then k can be set such that ∆Bk ≥1 2∆A and pBk ≥1 2f ⇤. 
Then we count the number of times that the prefixes can be chosen instead of A* when all items in the prefixes are observed. The last remaining issue is that f(A, U_t) is non-linear in the confidence radii of the items in A. Therefore, we bound it from above based on the following lemma.

Lemma 2. Let 0 ≤ p_1, ..., p_K ≤ 1 and u_1, ..., u_K ≥ 0. Then:

∏_{k=1}^K min{p_k + u_k, 1} ≤ ∏_{k=1}^K p_k + Σ_{k=1}^K u_k.

This bound is tight when p_1, ..., p_K = 1 and u_1, ..., u_K = 0. The rest of our analysis is along the lines of Theorem 5 in Kveton et al. [12]. We can achieve linear dependency on K, in exchange for a multiplicative factor of 534 in our upper bound. We also prove the following gap-free bound.

Theorem 2. The regret of CombCascade is bounded as

R(n) ≤ 131 √(KLn log n / f*) + (π²/3) L.

Proof. The proof is in Appendix B. The key idea is to decompose the regret of CombCascade into two parts, in which the gaps Δ_{A_t} are at most ε and larger than ε. We analyze each part separately and then set ε to get the desired result.

3.2 Discussion

[Figure 1: The regret of CombCascade and CombUCB1 in the synthetic experiment (Section 4.1), for three settings of w̄: (0.4, 0.4, 0.2, 0.2), (0.4, 0.4, 0.9, 0.1), and (0.4, 0.4, 0.3, 0.3). The results are averaged over 100 runs.]

In Section 3.1, we prove two upper bounds on the n-step regret of CombCascade:

Theorem 1: O(KL(1/f*)(1/Δ) log n),
Theorem 2: O(√(KL(1/f*) n log n)),

where Δ = min_{e ∈ Ẽ} Δ_{e,min}. These bounds do not depend on the total number of feasible solutions |Θ| and are polynomial in any other quantity of interest. The bounds match, up to O(1/f*) factors, the upper bounds of CombUCB1 in stochastic combinatorial semi-bandits [12]. Since CombCascade receives less feedback than CombUCB1, this is rather surprising and unexpected. The upper bounds of Kveton et al.
[12] are known to be tight up to polylogarithmic factors. We believe that our upper bounds are also tight in a setting similar to that of Kveton et al. [12], where the expected weight of each item is close to 1 and each item is likely to be observed. The assumption that f* is large is often reasonable. In network routing, the optimal routing path is likely to be reliable. In recommender systems, the optimal recommended list is likely to satisfy a reasonably large fraction of users.

4 Experiments

We evaluate CombCascade in three experiments. In Section 4.1, we compare it to CombUCB1 [12], a state-of-the-art algorithm for stochastic combinatorial semi-bandits with a linear reward function. This experiment shows that CombUCB1 cannot solve all instances of our problem, which highlights the need for a new learning algorithm. It also shows the limitations of CombCascade. We evaluate CombCascade on two real-world problems in Sections 4.2 and 4.3.

4.1 Synthetic

In the first experiment, we compare CombCascade to CombUCB1 [12] on a synthetic problem. This problem is a combinatorial cascading bandit with L = 4 items and Θ = {(1, 2), (3, 4)}. CombUCB1 is a popular algorithm for stochastic combinatorial semi-bandits with a linear reward function. We approximate max_{A ∈ Θ} f(A, w) by min_{A ∈ Θ} Σ_{e ∈ A} (1 − w(e)). This approximation is motivated by the fact that f(A, w) = ∏_{e ∈ A} w(e) ≈ 1 − Σ_{e ∈ A} (1 − w(e)) as min_{e ∈ E} w(e) → 1. We update the estimates of w̄ in CombUCB1 as in CombCascade, based on the weights of the observed items in (1). We experiment with three settings of w̄, which are reported in our plots, and show our results in Figure 1. We assume that the w_t(e) are distributed independently, except for the last plot, where w_t(3) = w_t(4). Our plots represent three common scenarios that we encountered in our experiments. In the first plot, argmax_{A ∈ Θ} f(A, w̄) = argmin_{A ∈ Θ} Σ_{e ∈ A} (1 − w̄(e)). In this case, both CombCascade and CombUCB1 can learn A*.
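The quality of this linear surrogate can be seen directly. The snippet below (our own illustration, not from the paper) compares the product reward ∏_e w(e) with the surrogate 1 − Σ_e (1 − w(e)) for weights near one and far from one:

```python
def f(weights):
    # conjunctive reward: probability that every chosen item is "up"
    prod = 1.0
    for w in weights:
        prod *= w
    return prod

def linear_surrogate(weights):
    # first-order approximation used to run CombUCB1 on this problem
    return 1.0 - sum(1.0 - w for w in weights)

# near-one weights: the surrogate is accurate
print(f([0.99, 0.98]), linear_surrogate([0.99, 0.98]))
# weights far from one: the surrogate degrades (and can even go negative)
print(f([0.4, 0.4]), linear_surrogate([0.4, 0.4]))
```

The second pair is exactly the regime of the synthetic experiment, where ranking solutions by the surrogate disagrees with ranking them by the true product reward.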
The regret of CombCascade is slightly lower than that of CombUCB1. In the second plot, argmax_{A ∈ Θ} f(A, w̄) ≠ argmin_{A ∈ Θ} Σ_{e ∈ A} (1 − w̄(e)). In this case, CombUCB1 cannot learn A* and therefore suffers linear regret. In the third plot, we violate our modeling assumptions. Perhaps surprisingly, CombCascade can still learn the optimal solution A*, although it suffers higher regret than CombUCB1.

4.2 Network Routing

In the second experiment, we evaluate CombCascade on a problem of network routing. We experiment with six networks from the RocketFuel dataset [17], which are described in Figure 2a. Our learning problem is formulated as follows. The ground set E are the links in the network. The feasible set Θ are all paths in the network. At time t, we generate a random pair of starting and end nodes, and the learning agent chooses a routing path between these nodes. The goal of the agent is to maximize the probability that all links in the path are up. The feedback is the index of the first link in the path that is down. The weight of link e at time t, w_t(e), is an indicator of link e being up at time t.

[Figure 2: a. The six networks from our network routing experiment (Section 4.2): network 1221 (108 nodes, 153 links), 1239 (315, 972), 1755 (87, 161), 3257 (161, 328), 3967 (79, 147) and 6461 (141, 374). b. The n-step regret of CombCascade in these networks. The results are averaged over 50 runs.]

We model w_t(e) as an independent Bernoulli random variable w_t(e) ∼ Ber(w̄(e)) with mean w̄(e) = 0.7 + 0.2 local(e), where local(e) is an indicator of link e being local. We say that a link is local when its expected latency is at most 1 millisecond. About half of the links in our networks are local. To summarize, the local links are up with probability 0.9, and are more reliable than the global links, which are up only with probability 0.7.
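A minimal simulation of this cascade feedback model (the helper names are ours; the 0.7/0.9 link reliabilities are the ones from the experiment) can be sketched as:

```python
import random

def link_mean(is_local):
    # w_bar(e) = 0.7 + 0.2 * local(e): local links are up w.p. 0.9, global w.p. 0.7
    return 0.7 + 0.2 * (1 if is_local else 0)

def route_once(path_locals, rng):
    """One routing attempt under the cascade feedback model: return the index
    of the first link that is down, or None if every link on the path is up."""
    for i, is_local in enumerate(path_locals):
        if rng.random() >= link_mean(is_local):
            return i  # links after position i are unobserved
    return None

rng = random.Random(0)
trials = 100_000
failures = sum(route_once([True, False, True], rng) is not None for _ in range(trials))
# P(all links up) = 0.9 * 0.7 * 0.9 = 0.567, so roughly 43% of attempts fail
print(failures / trials)
```

Note how the feedback is censored: when link i is down, nothing is learned about links i+1 onward, which is exactly why the analysis has to control the observation probability p_A.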
Our results are reported in Figure 2b. We observe that the n-step regret of CombCascade flattens as time n increases. This means that CombCascade learns near-optimal policies in all networks.

4.3 Diverse Recommendations

In our last experiment, we evaluate CombCascade on a problem of diverse recommendations. This problem is motivated by on-demand media streaming services like Netflix, which often recommend groups of movies, such as "Popular on Netflix" and "Dramas". We experiment with the MovieLens dataset [13] from March 2015. The dataset contains 138k people who assigned 20M ratings to 27k movies between January 1995 and March 2015. Our learning problem is formulated as follows. The ground set E are 200 movies from our dataset: the 25 most rated animated movies, 75 random animated movies, the 25 most rated non-animated movies, and 75 random non-animated movies. The feasible set Θ are all K-permutations of E in which K/2 movies are animated. The weight of item e at time t, w_t(e), indicates that item e attracts the user at time t. We assume that w_t(e) = 1 if and only if the user rated item e in our dataset. This indicates that the user watched movie e at some point in time, perhaps because the movie was attractive. The user at time t is drawn randomly from our pool of users. The goal of the learning agent is to learn a list of items A* = argmax_{A ∈ Θ} E[f_∨(A, w)] that maximizes the probability that at least one item is attractive. The feedback is the index of the first attractive item in the list (Section 2.3). We would like to point out that our modeling assumptions are violated in this experiment. In particular, the w_t(e) are correlated across items e because the users do not rate movies independently. The result is that A* ≠ argmax_{A ∈ Θ} f_∨(A, w̄). It is NP-hard to compute A*. However, E[f_∨(A, w)] is submodular and monotone in A, and therefore a (1 − 1/e) approximation to A* can be computed greedily. We denote this approximation by A* and show it for K = 8 in Figure 3a.
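Since the objective is monotone submodular, the greedy (1 − 1/e) approximation mentioned above takes only a few lines. In this sketch the toy user sets are ours, and the K/2-animated constraint is omitted for brevity:

```python
def coverage(users, chosen):
    # empirical E[f_or(A, w)]: fraction of users attracted by at least one chosen item
    return sum(any(e in u for e in chosen) for u in users) / len(users)

def greedy(users, items, K):
    # standard greedy maximization of a monotone submodular set function
    chosen = []
    for _ in range(K):
        best = max((e for e in items if e not in chosen),
                   key=lambda e: coverage(users, chosen + [e]))
        chosen.append(best)
    return chosen

# each set lists the items a (hypothetical) user finds attractive
users = [{0, 1}, {1}, {2}, {0, 2}]
picked = greedy(users, [0, 1, 2], 2)
print(picked, coverage(users, picked))
```

On this toy instance greedy ties are broken by item order, so it returns [0, 1] with coverage 0.75, while the optimum {1, 2} covers everyone; this is within the guaranteed (1 − 1/e) factor.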
Our results are reported in Figure 3b. Similarly to Figure 2b, the n-step regret of CombCascade is a concave function of time n for all studied K. This indicates that the solutions of CombCascade improve over time. We note that the regret does not flatten as in Figure 2b. The reason is that CombCascade does not learn A*. Nevertheless, it performs well, and we expect comparably good performance in other domains where our modeling assumptions are not satisfied. Our current theory cannot explain this behavior and we leave it for future work.

5 Related Work

Our work generalizes the cascading bandits of Kveton et al. [10] to arbitrary combinatorial constraints. The feasible set in cascading bandits is a uniform matroid: any list of K items out of L is feasible. Our generalization significantly expands the applicability of the original model, and we demonstrate this on two novel real-world problems (Section 4). Our work also extends stochastic combinatorial semi-bandits with a linear reward function [8, 11, 12] to the cascade model of feedback. A model similar to cascading bandits was recently studied by Combes et al. [7].

[Figure 3: a. The optimal list of 8 movies in the diverse recommendations experiment (Section 4.3): Pulp Fiction, Forrest Gump, Independence Day and Shawshank Redemption (non-animated); Toy Story, Shrek, Who Framed Roger Rabbit? and Aladdin (animated). b. The n-step regret of CombCascade in this experiment for K = 8, 12, 16. The results are averaged over 50 runs.]

Our generalization is significant for two reasons. First, CombCascade is a novel learning algorithm. CombUCB1 [12] chooses solutions with the largest sum of the UCBs. CascadeUCB1 [10] chooses the K items out of L with the largest UCBs. CombCascade chooses solutions with the largest product of the UCBs. All three algorithms can find the optimal solution in cascading bandits.
However, when the feasible set is not a matroid, it is critical to maximize the product of the UCBs. CombUCB1 may learn a suboptimal solution in this setting, as we illustrate in Section 4.1. Second, our analysis is novel. The proof of Theorem 1 is different from those of Theorems 2 and 3 in Kveton et al. [10]. Those proofs are based on counting the number of times that each suboptimal item is chosen instead of any optimal item. They can only be applied to special feasible sets, such as a matroid, because they require that the items in the feasible solutions are exchangeable. We build on the recent work of Kveton et al. [12] to achieve linear dependency on K in Theorem 1. The rest of our analysis is novel. Our problem is a partial monitoring problem where some of the chosen items may be unobserved. Agrawal et al. [1] and Bartok et al. [4] studied partial monitoring problems and proposed learning algorithms for solving them. These algorithms are impractical in our setting. As an example, if we formulate our problem as in Bartok et al. [4], we get |Θ| actions and 2^L unobserved outcomes; the learning algorithm then reasons over |Θ|² pairs of actions and requires O(2^L) space. Lin et al. [15] also studied combinatorial partial monitoring. Their feedback is a linear function of the weights of the chosen items, whereas our feedback is a non-linear function of the weights. Our reward function is non-linear in the unknown parameters. Chen et al. [5] studied stochastic combinatorial semi-bandits with a non-linear reward function, which is a known monotone function of an unknown linear function. The feedback in Chen et al. [5] is semi-bandit, which is more informative than in our work. Le et al. [14] studied a network optimization problem where the reward function is a non-linear function of the observations.
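The difference between the selection rules is concrete. The sketch below uses UCB1-style confidence radii (the exact confidence term in the paper may differ) and shows the product rule recovering the optimal solution in the second synthetic scenario, where the sum rule fails:

```python
import math

def ucb(w_hat, pulls, t):
    # UCB1-style upper confidence bound, clipped to [0, 1]
    return min(1.0, w_hat + math.sqrt(1.5 * math.log(t) / pulls))

def pick(solutions, w_hat, pulls, t, combine):
    # choose the feasible solution whose item UCBs maximize `combine`
    return max(solutions, key=lambda A: combine(ucb(w_hat[e], pulls[e], t) for e in A))

def product(xs):
    out = 1.0
    for x in xs:
        out *= x
    return out

solutions = [(0, 1), (2, 3)]
w_hat = {0: 0.4, 1: 0.4, 2: 0.9, 3: 0.1}   # second synthetic scenario
pulls = {e: 1000 for e in w_hat}
print(pick(solutions, w_hat, pulls, 1000, product))  # CombCascade-style rule
print(pick(solutions, w_hat, pulls, 1000, sum))      # CombUCB1-style rule
```

Here the true rewards are 0.4 × 0.4 = 0.16 and 0.9 × 0.1 = 0.09, so (0, 1) is optimal; the product rule selects it, while the sum rule prefers (2, 3).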
6 Conclusions

We propose combinatorial cascading bandits, a class of stochastic partial monitoring problems that can model many practical problems, such as learning a routing path in an unreliable communication network that maximizes the probability of packet delivery, and learning to recommend a list of attractive items. We propose a practical UCB-like algorithm for our problems, CombCascade, and prove upper bounds on its regret. We evaluate CombCascade on two real-world problems and show that it performs well even when our modeling assumptions are violated. Our results and analysis apply to any combinatorial action set, and therefore are quite general. The strongest assumption in our work is that the weights of the items are distributed independently of each other. This assumption is critical and hard to eliminate (Section 2.1). Nevertheless, it can easily be relaxed to conditional independence given the features of the items, along the lines of Wen et al. [19]. We leave this for future work. From the theoretical point of view, we want to derive a lower bound on the n-step regret in combinatorial cascading bandits, and show that the 1/f* factor in Theorems 1 and 2 is intrinsic.

References

[1] Rajeev Agrawal, Demosthenis Teneketzis, and Venkatachalam Anantharam. Asymptotically efficient adaptive allocation schemes for controlled i.i.d. processes: Finite parameter space. IEEE Transactions on Automatic Control, 34(3):258–267, 1989.
[2] Shipra Agrawal and Navin Goyal. Analysis of Thompson sampling for the multi-armed bandit problem. In Proceedings of the 25th Annual Conference on Learning Theory, pages 39.1–39.26, 2012.
[3] Peter Auer, Nicolo Cesa-Bianchi, and Paul Fischer. Finite-time analysis of the multiarmed bandit problem. Machine Learning, 47:235–256, 2002.
[4] Gabor Bartok, Navid Zolghadr, and Csaba Szepesvari. An adaptive algorithm for finite stochastic partial monitoring. In Proceedings of the 29th International Conference on Machine Learning, 2012.
[5] Wei Chen, Yajun Wang, and Yang Yuan. Combinatorial multi-armed bandit: General framework, results and applications. In Proceedings of the 30th International Conference on Machine Learning, pages 151–159, 2013.
[6] Baek-Young Choi, Sue Moon, Zhi-Li Zhang, Konstantina Papagiannaki, and Christophe Diot. Analysis of point-to-point packet delay in an operational network. In Proceedings of the 23rd Annual Joint Conference of the IEEE Computer and Communications Societies, 2004.
[7] Richard Combes, Stefan Magureanu, Alexandre Proutiere, and Cyrille Laroche. Learning to rank: Regret lower bounds and efficient algorithms. In Proceedings of the 2015 ACM SIGMETRICS International Conference on Measurement and Modeling of Computer Systems, 2015.
[8] Yi Gai, Bhaskar Krishnamachari, and Rahul Jain. Combinatorial network optimization with unknown variables: Multi-armed bandits with linear rewards and individual observations. IEEE/ACM Transactions on Networking, 20(5):1466–1478, 2012.
[9] Aurelien Garivier and Olivier Cappe. The KL-UCB algorithm for bounded stochastic bandits and beyond. In Proceedings of the 24th Annual Conference on Learning Theory, pages 359–376, 2011.
[10] Branislav Kveton, Csaba Szepesvari, Zheng Wen, and Azin Ashkan. Cascading bandits: Learning to rank in the cascade model. In Proceedings of the 32nd International Conference on Machine Learning, 2015.
[11] Branislav Kveton, Zheng Wen, Azin Ashkan, Hoda Eydgahi, and Brian Eriksson. Matroid bandits: Fast combinatorial optimization with learning. In Proceedings of the 30th Conference on Uncertainty in Artificial Intelligence, pages 420–429, 2014.
[12] Branislav Kveton, Zheng Wen, Azin Ashkan, and Csaba Szepesvari. Tight regret bounds for stochastic combinatorial semi-bandits. In Proceedings of the 18th International Conference on Artificial Intelligence and Statistics, 2015.
[13] Shyong Lam and Jon Herlocker. MovieLens Dataset. http://grouplens.org/datasets/movielens/, 2015.
[14] Thanh Le, Csaba Szepesvari, and Rong Zheng. Sequential learning for multi-channel wireless network monitoring with channel switching costs. IEEE Transactions on Signal Processing, 62(22):5919–5929, 2014.
[15] Tian Lin, Bruno Abrahao, Robert Kleinberg, John Lui, and Wei Chen. Combinatorial partial monitoring game with linear feedback and its applications. In Proceedings of the 31st International Conference on Machine Learning, pages 901–909, 2014.
[16] Christos Papadimitriou and Kenneth Steiglitz. Combinatorial Optimization. Dover Publications, Mineola, NY, 1998.
[17] Neil Spring, Ratul Mahajan, and David Wetherall. Measuring ISP topologies with Rocketfuel. IEEE/ACM Transactions on Networking, 12(1):2–16, 2004.
[18] William R. Thompson. On the likelihood that one unknown probability exceeds another in view of the evidence of two samples. Biometrika, 25(3-4):285–294, 1933.
[19] Zheng Wen, Branislav Kveton, and Azin Ashkan. Efficient learning in large-scale combinatorial semi-bandits. In Proceedings of the 32nd International Conference on Machine Learning, 2015.
Probabilistic Curve Learning: Coulomb Repulsion and the Electrostatic Gaussian Process

Ye Wang, Department of Statistics, Duke University, Durham, NC, USA, 27705. eric.ye.wang@duke.edu
David Dunson, Department of Statistics, Duke University, Durham, NC, USA, 27705. dunson@stat.duke.edu

Abstract

Learning of low-dimensional structure in multidimensional data is a canonical problem in machine learning. One common approach is to suppose that the observed data are close to a lower-dimensional smooth manifold. There is a rich variety of manifold learning methods available, which allow mapping of data points to the manifold. However, there is a clear lack of probabilistic methods that allow learning of the manifold along with the generative distribution of the observed data. The best attempt is the Gaussian process latent variable model (GP-LVM), but identifiability issues lead to poor performance. We solve these issues by proposing a novel Coulomb repulsive process (Corp) for the locations of points on the manifold, inspired by physical models of electrostatic interactions among particles. Combining this process with a GP prior for the mapping function yields a novel electrostatic GP (electroGP) process. Focusing on the simple case of a one-dimensional manifold, we develop efficient inference algorithms, and illustrate substantially improved performance in a variety of experiments, including filling in missing frames in video.

1 Introduction

There is broad interest in learning and exploiting lower-dimensional structure in high-dimensional data. A canonical case is when the low-dimensional structure corresponds to a p-dimensional smooth Riemannian manifold M embedded in the d-dimensional ambient space Y of the observed data y. Assuming that the observed data are close to M, it becomes of substantial interest to learn M along with the mapping μ from M to Y.
This allows better data visualization and allows one to exploit the lower-dimensional structure to combat the curse of dimensionality when developing efficient machine learning algorithms for a variety of tasks. The current literature on manifold learning focuses on estimating the coordinates x ∈ M corresponding to y by optimization, finding x's on the manifold M that preserve distances between the corresponding y's in Y. There are many such methods, including Isomap [1], locally-linear embedding [2] and Laplacian eigenmaps [3]. Such methods have seen broad use, but have some clear limitations relative to probabilistic manifold learning approaches, which allow explicit learning of M, the mapping μ and the distribution of y. There has been considerable focus on probabilistic models that would seem to allow learning of M and μ. Two notable examples are mixtures of factor analyzers (MFA) [4, 5] and Gaussian process latent variable models (GP-LVM) [6]. Bayesian GP-LVM [7] is a Bayesian formulation of GP-LVM that automatically learns the intrinsic dimension p and handles missing data. Such approaches are useful for exploiting lower-dimensional structure in estimating the distribution of y, but unfortunately have critical problems in terms of reliable estimation of the manifold and mapping function. MFA is not smooth, approximating the manifold with a collage of lower-dimensional hyperplanes, and hence we focus further discussion on Bayesian GP-LVM; similar problems occur for MFA and other probabilistic manifold learning methods. In general form, for the ith data vector, Bayesian GP-LVM lets y_i = μ(x_i) + ε_i, with μ assigned a Gaussian process prior, x_i generated from a pre-specified Gaussian or uniform distribution over a p-dimensional space, and the residual ε_i drawn from a d-dimensional Gaussian centered on zero with diagonal or spherical covariance.
While this model seems appropriate for manifold learning, identifiability problems lead to extremely poor performance in estimating M and μ. To give an intuition for the root cause of the problem, consider the case in which the x_i are drawn independently from a uniform distribution over [0, 1]^p. The model is so flexible that we could fit the training data y_i, for i = 1, ..., n, just as well if we did not use the entire hypercube but instead placed all the x_i values in a small subset of [0, 1]^p. The uniform prior will not discourage this tendency not to spread out the latent coordinates, which unfortunately has the disastrous consequences illustrated in our experiments. The structure of the model is simply too flexible, and further constraints are needed. Replacing the uniform with a standard Gaussian does not solve the problem. Constrained likelihood methods [8, 9] mitigate the issue to some extent, but do not correspond to a proper Bayesian generative model. To make the problem more tractable, we focus on the case in which M is a one-dimensional smooth compact manifold. Assume y_i = μ(x_i) + ε_i, with ε_i Gaussian noise, and μ : (0, 1) → M a smooth mapping such that μ_j(·) ∈ C^∞ for j = 1, ..., d, where μ(x) = (μ_1(x), ..., μ_d(x)). We focus on finding a good estimate of μ, and hence the manifold, via a probabilistic learning framework. We refer to this problem as probabilistic curve learning (PCL), motivated by the principal curve literature [10]. PCL differs substantially from the principal curve learning problem, which seeks to estimate a non-linear curve through the data that may be very different from the true manifold. Our proposed approach builds on GP-LVM; in particular, our primary innovation is to generate the latent coordinates x_i from a novel repulsive process. There is an interesting literature on repulsive point process modeling, ranging from various Matérn processes [11] to the determinantal point process (DPP) [12].
In our very different context, these processes lead to unnecessary complexity, computationally and otherwise, and we propose a new Coulomb repulsive process (Corp) motivated by Coulomb's law of electrostatic interaction between electrically charged particles. Using Corp for the latent positions has the effect of strongly favoring spread-out locations on the manifold, effectively solving the identifiability problem mentioned above for the GP-LVM. We refer to the GP with Corp on the latent positions as an electrostatic GP (electroGP). The remainder of the paper is organized as follows. The Coulomb repulsive process is proposed in § 2 and the electroGP is presented in § 3, with a comparison between electroGP and GP-LVM demonstrated via simulations. The performance is further evaluated via real-world datasets in § 4. A discussion is reported in § 5.

2 Coulomb repulsive process

2.1 Formulation

Definition 1. A univariate process is a Coulomb repulsive process (Corp) if and only if for every finite set of indices t_1, ..., t_k in the index set N+,

X_{t_1} ∼ Unif(0, 1),
p(X_{t_i} | X_{t_1}, ..., X_{t_{i−1}}) ∝ ∏_{j=1}^{i−1} sin^{2r}(πX_{t_i} − πX_{t_j}) 1{X_{t_i} ∈ [0, 1]}, i > 1, (1)

where r > 0 is the repulsive parameter. The process is denoted X_t ∼ Corp(r).

The process is named for its analogy to electrostatic physics, where, by Coulomb's law, two positive charges repel each other with a force proportional to the reciprocal of their squared distance. Letting d(x, y) = sin|πx − πy|, the above conditional probability of X_{t_i} given X_{t_j} is proportional to d^{2r}(X_{t_i}, X_{t_j}), shrinking the probability rapidly as two states get close to each other. Note that the periodicity of the sine function eliminates the edges of [0, 1], making the electrostatic energy field homogeneous everywhere on [0, 1]. Several observations related to the Kolmogorov extension theorem can be made immediately, ensuring that Corp is well defined.
Firstly, the conditional density defined in (1) is positive and integrable, since the X_t's are constrained to a compact interval and sin^{2r}(·) is positive and bounded. Hence, the finite-dimensional distributions are well defined. Secondly, the joint finite-dimensional p.d.f. of X_{t_1}, ..., X_{t_k} can be derived as

p(X_{t_1}, ..., X_{t_k}) ∝ ∏_{i>j} sin^{2r}(πX_{t_i} − πX_{t_j}). (2)

As can easily be seen, any permutation of t_1, ..., t_k results in the same joint finite-dimensional distribution; hence this distribution is exchangeable. Thirdly, it can easily be checked that for any finite set of indices t_1, ..., t_{k+m},

p(X_{t_1}, ..., X_{t_k}) = ∫_0^1 ··· ∫_0^1 p(X_{t_1}, ..., X_{t_k}, X_{t_{k+1}}, ..., X_{t_{k+m}}) dX_{t_{k+1}} ··· dX_{t_{k+m}},

by observing that p(X_{t_1}, ..., X_{t_{k+m}}) = p(X_{t_1}, ..., X_{t_k}) ∏_{j=1}^m p(X_{t_{k+j}} | X_{t_1}, ..., X_{t_{k+j−1}}).

[Figure 1: Each facet consists of 5 rows, with each row representing a one-dimensional scatterplot of a random realization of Corp for a given n and r.]

2.2 Properties

Assuming X_t, t ∈ N+, is a realization from Corp, the following lemmas hold.

Lemma 1. For any n ∈ N+, any 1 ≤ i < n and any ε > 0, we have

p(X_n ∈ B(X_i, ε) | X_1, ..., X_{n−1}) < 2π² ε^{2r+1} / (2r + 1),

where B(X_i, ε) = {X ∈ (0, 1) : d(X, X_i) < ε}.

Lemma 2. For any n ∈ N+, the p.d.f. (2) of X_1, ..., X_n (by exchangeability, we can assume X_1 < X_2 < ··· < X_n without loss of generality) is maximized when and only when d(X_i, X_{i−1}) = sin(1/(n + 1)) for all 2 ≤ i ≤ n.

By Lemma 1 and Lemma 2, Corp nudges the x's to be spread out within [0, 1] and penalizes configurations in which two x's get too close. Figure 1 presents some simulations from Corp. This nudge becomes stronger as the sample size n grows, or as the repulsive parameter r grows. These properties make Corp ideal for strongly favoring spread-out latent positions across the manifold, avoiding the gaps and the clustering in small regions that plague GP-LVM-type methods.
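Because each factor sin^{2r} in the conditional density (1) is at most 1, a uniform-proposal rejection sampler for Corp is immediate; a minimal sketch (function names are ours, not from the paper's supplement):

```python
import math
import random

def sample_corp(n, r=1.0, seed=0):
    """Draw n points from Corp(r) on (0, 1) by sequential rejection sampling.
    Each sin^{2r} factor is at most 1, so accepting a Unif(0, 1) proposal x with
    probability prod_j sin^2(pi(x - x_j))^r is a valid rejection step."""
    rng = random.Random(seed)
    xs = [rng.random()]  # X_1 ~ Unif(0, 1)
    while len(xs) < n:
        x = rng.random()
        dens = 1.0
        for xj in xs:
            dens *= math.sin(math.pi * (x - xj)) ** 2
        if rng.random() < dens ** r:  # sin^{2r} = (sin^2)^r
            xs.append(x)
    return xs

xs = sample_corp(20, r=1.0)
print(sorted(xs))  # the repulsion leaves the points well spread out on (0, 1)
```

The acceptance rate drops as n or r grows, which is why this simple sampler is only practical for moderate n.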
The proofs of the lemmas, and a simulation algorithm based on rejection sampling, can be found in the supplement.

2.3 Multivariate Corp

Definition 2. A p-dimensional multivariate process is a Coulomb repulsive process if and only if for every finite set of indices t_1, ..., t_k in the index set N+,

X_{m,t_1} ∼ Unif(0, 1), for m = 1, ..., p,
p(X_{t_i} | X_{t_1}, ..., X_{t_{i−1}}) ∝ ∏_{j=1}^{i−1} [ Σ_{m=1}^{p+1} (Y_{m,t_i} − Y_{m,t_j})² ]^r 1{X_{t_i} ∈ (0, 1)^p}, i > 1,

where the p-dimensional spherical coordinates X_t have been converted into the (p+1)-dimensional Cartesian coordinates Y_t:

Y_{1,t} = cos(2πX_{1,t})
Y_{2,t} = sin(2πX_{1,t}) cos(2πX_{2,t})
...
Y_{p,t} = sin(2πX_{1,t}) sin(2πX_{2,t}) ··· sin(2πX_{p−1,t}) cos(2πX_{p,t})
Y_{p+1,t} = sin(2πX_{1,t}) sin(2πX_{2,t}) ··· sin(2πX_{p−1,t}) sin(2πX_{p,t}).

The multivariate Corp maps the hypercube (0, 1)^p through a spherical coordinate system onto the unit hypersphere in R^{p+1}. The repulsion is then defined via the reciprocal of the squared Euclidean distances between these mapped points in R^{p+1}. Based on this construction of the multivariate Corp, a straightforward generalization of the electroGP model to a p-dimensional manifold with p > 1 can be made.
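The spherical-coordinate map in Definition 2 sends each x ∈ (0, 1)^p to a unit vector in R^{p+1}, which is easy to verify numerically; a small sketch (ours):

```python
import math

def spherical_to_cartesian(x):
    """Map x in (0, 1)^p to Y in R^{p+1} following Definition 2:
    Y_m is a running product of sines times one cosine, and the last
    coordinate is the product of all the sines."""
    y = []
    s = 1.0  # running product sin(2*pi*x_1) ... sin(2*pi*x_{m-1})
    for xm in x:
        y.append(s * math.cos(2 * math.pi * xm))
        s *= math.sin(2 * math.pi * xm)
    y.append(s)
    return y

y = spherical_to_cartesian([0.3, 0.7])
print(sum(v * v for v in y))  # the image lies on the unit sphere
```

Since the image has unit norm, the squared Euclidean distances in the density of Definition 2 are bounded, and antipodal points are maximally repelled.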
Based on our experience, setting r “ 1 always yields good results, and hence is used as a default across this paper. For the simplicity of notations, r is excluded in the remainder. The above optimization problem can be rewritten as pˆxxx, ˆΘq “ arg max xxx.Θ ℓpyyy1:n|xxx, Θq ` log “ πpxxxq ‰ , where ℓp¨q denotes the log likelihood function and πp¨q denotes the finite dimensional pdf of Corp. Hence the Corp prior can also be viewed as a repulsive constraint in the optimization problem. It can be easily checked that log “ πpxi “ xjq ‰ “ ´8, for any i and j. Starting at initial values x0, the optimizer will converge to a local solution that maintains the same order as the initial x0’s. We refer to this as the self-truncation property. We find that conditionally on the starting order, the optimization algorithm converges rapidly and yields stable results. Although the x’s are not identifiable, since the target function (4) is invariant under rotation, a unique solution does exist conditionally on the specified order. Self-truncation raises the necessity of finding good initial values, or at least a good initial ordering for x’s. Fortunately, in our experience, simply applying any standard manifold learning algorithm to estimate x0 in a manner that preserves distances in Y yields good performance. We find very similar results using LLE, Isomap and eigenmap, but focus on LLE in all our implementations. Our algorithm can be summarized as follows. 1. Learn the one dimensional coordinate xxx0 by your favorite distance-preserving manifold learning algorithm and rescale xxx0 into p0, 1q; 4 Figure 2: Visualization of three simulation experiments where the data (triangles) are simulated from a bivariate Gaussian (left), a rotated parabola with Gaussian noises (middle) and a spiral with Gaussian noises (right). The dotted shading denotes the 95% posterior predictive uncertainty band of py1, y2q under electroGP. 
[Figure 2 (continued): The black curve denotes the posterior mean curve under electroGP and the red curve denotes the P-curve. The three dashed curves denote three realizations from GP-LVM. The middle panel shows a zoomed-in region; the full figure is shown in the embedded box.]

2. Solve Θ_0 = argmax_Θ p(y_{1:n} | x_0, Θ, r) using scaled conjugate gradient descent (SCG);
3. Using SCG, with x_0 and Θ_0 as initial values, solve for x̂ and Θ̂ in (4).

3.2 Posterior Mean Curve and Uncertainty Bands

In this subsection, we describe how to obtain a point estimate of the curve μ and how to characterize its uncertainty under electroGP. Such point and interval estimation is as yet unsolved in the literature, and is of critical importance: it is difficult to interpret a point estimate without some quantification of how uncertain that estimate is. We use the posterior mean curve μ̂ = E(μ | x̂, y_{1:n}, Θ̂) as the Bayes optimal estimator under squared error loss. As a curve, μ̂ is infinite-dimensional. Hence, in order to store and visualize it, we discretize [0, 1] into n_μ equally spaced grid points x_i^μ = (i − 1)/(n_μ − 1) for i = 1, ..., n_μ. Using basic multivariate Gaussian theory, the following expectation is easy to compute:

(μ̂(x_1^μ), ..., μ̂(x_{n_μ}^μ)) = E(μ(x_1^μ), ..., μ(x_{n_μ}^μ) | x̂, y_{1:n}, Θ̂).

Then μ̂ is approximated by linear interpolation using {x_i^μ, μ̂(x_i^μ)}_{i=1}^{n_μ}. For ease of notation, we use μ̂ to denote this interpolated piecewise linear curve from here on. Examples can be found in Figure 2, where all the mean curves (solid black) were obtained using the above method. Estimating an uncertainty region that includes data points with probability η is much more challenging. We address this problem with the following heuristic algorithm.

Step 1. Draw x_i* from Unif(0, 1) independently for i = 1, ..., n_1;

Step 2.
Sample the corresponding y_i* from the posterior predictive distribution conditional on these latent coordinates, p(y_1*, ..., y_{n_1}* | x_{1:n_1}*, x̂, y_{1:n}, Θ̂);

Step 3. Repeat Steps 1–2 n_2 times, collecting all n_1 × n_2 samples y*;

Step 4. Find the shortest distances from these y*'s to the posterior mean curve μ̂, and find the η-quantile of these distances, denoted ρ;

Step 5. Moving a ball of radius ρ along the entire curve μ̂([0, 1]), the envelope of the moving trace defines the η% uncertainty band.

Note that Step 4 can easily be solved since μ̂ is a piecewise linear curve. Examples can be found in Figure 2, where the 95% uncertainty bands (dotted shading) were found using the above algorithm.

[Figure 3: Zoom-in of the spiral (case 3, left) and the corresponding coordinate function μ_2(x) under electroGP (middle) and GP-LVM (right). The gray shading denotes the heatmap of the posterior distribution of (x, y_2) and the black curve denotes the posterior mean.]

3.3 Simulation

In this subsection, we compare the performance of electroGP with GP-LVM and principal curves (P-curve) in simple simulation experiments. 100 data points were sampled from each of the following three 2-dimensional distributions: a Gaussian distribution, a rotated parabola with Gaussian noise, and a spiral with Gaussian noise. ElectroGP and GP-LVM were fitted using the same initial values obtained from LLE, and the P-curve was fitted using the princurve package in R. The performance of the three methods is compared in Figure 2. The dotted shading represents a 95% posterior predictive uncertainty band for a new data point y_{n+1} under the electroGP model. This illustrates that electroGP obtains an excellent fit to the data, provides a good characterization of uncertainty, and accurately captures the concentration near a one-dimensional manifold embedded in two dimensions. The P-curve is plotted in red.
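Because the posterior mean curve is piecewise linear, Step 4 of the band construction above reduces to point-to-segment distances; a compact sketch (hypothetical helper names, synthetic draws):

```python
import math

def dist_point_segment(p, a, b):
    # shortest Euclidean distance from point p to the segment [a, b] in R^d
    ab = [bi - ai for ai, bi in zip(a, b)]
    ap = [pi - ai for ai, pi in zip(a, p)]
    denom = sum(v * v for v in ab)
    t = 0.0 if denom == 0 else max(0.0, min(1.0, sum(x * y for x, y in zip(ap, ab)) / denom))
    return math.sqrt(sum((pi - (ai + t * v)) ** 2 for pi, ai, v in zip(p, a, ab)))

def band_radius(samples, curve, eta=0.95):
    """eta-quantile of the shortest distances from posterior predictive draws
    to the piecewise-linear mean curve (whose vertices are in `curve`)."""
    d = sorted(min(dist_point_segment(p, curve[i], curve[i + 1])
                   for i in range(len(curve) - 1)) for p in samples)
    return d[min(len(d) - 1, math.ceil(eta * len(d)) - 1)]

curve = [(0.0, 0.0), (1.0, 0.0)]                 # a flat mean curve, for illustration
draws = [(0.5, 0.1 * k) for k in range(1, 11)]   # stand-ins for predictive draws
print(band_radius(draws, curve, eta=0.9))
```

Sweeping a ball of this radius along the curve (Step 5) then traces out the band shown as dotted shading in Figure 2.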
The extremely poor representation produced by the P-curve is expected based on our experience fitting principal curves in a wide variety of cases; their behavior is highly unstable. In the first two cases, the P-curve corresponds to a smooth curve through the center of the data, but for the more complex manifold in the third case, the P-curve is an extremely poor representation. This tendency to cut across large regions of near-zero data density on highly curved manifolds is common for the P-curve. For GP-LVM, we show three random realizations (dashed) from the posterior in each case. It is clear the results are completely unreliable, with a tendency to place part of the curve where the data have high density, while also erratically adding extra parts outside the range of the data. The GP-LVM model does not appropriately penalize such extra parts, and the very poor performance shown in the top right of Figure 2 is not unusual. We find that electroGP in general performs dramatically better than its competitors. More simulation results can be found in the supplement. To better illustrate the results for spiral case 3, we zoom in and present some further comparisons of GP-LVM and electroGP in Figure 3. As can be seen in the right panel, optimizing the x's without any constraint results in "holes" on [0, 1]. The trajectories of the Gaussian process over these holes become arbitrary, as illustrated by the three realizations. This arbitrariness is further projected into the input space Y, resulting in the erratic curve observed in the left panel. Failing to have well spread out x's over [0, 1] not only causes trouble in learning the curve, but also makes the posterior predictive distribution of y_{n+1} overly diffuse near these holes, e.g., the large gray shaded area in the right panel.
The middle panel shows that electroGP fills in these holes by softly constraining the latent coordinates x to spread out, while still allowing the flexibility of moving them around to find a smooth curve snaking through them.
3.4 Prediction
Broad prediction problems can be formulated as the following missing-data problem. Assume m new data points z_i, for i = 1, ..., m, are partially observed and the missing entries are to be filled in. Letting z^O_i denote the observed part and z^M_i denote the missing part, the conditional distribution of the missing data is given by
p(z^M_1:m | z^O_1:m, x̂, y_1:n, Θ̂) = ∫ ⋯ ∫ p(z^M_1:m | x^z_1:m, x̂, y_1:n, Θ̂) p(x^z_1:m | z^O_1:m, x̂, y_1:n, Θ̂) dx^z_1 ⋯ dx^z_m,
where x^z_i is the latent coordinate corresponding to z_i, for i = 1, ..., m. However, dealing with (x^z_1, ..., x^z_m) jointly is intractable due to the high non-linearity of the Gaussian process, which motivates the following approximation:
p(x^z_1:m | z^O_1:m, x̂, y_1:n, Θ̂) ≈ ∏_{i=1}^m p(x^z_i | z^O_i, x̂, y_1:n, Θ̂).
The approximation assumes (x^z_1, ..., x^z_m) to be conditionally independent. This assumption is more accurate if x̂ is well spread out on (0, 1), as is favored by Corp. The univariate distribution p(x^z_i | z^O_i, x̂, y_1:n, Θ̂), though still intractable, is much easier to deal with. Depending on the purpose of the application, either a Metropolis-Hastings algorithm can be adopted to sample from the predictive distribution, or an optimization method can be used to find the MAP estimate of the x^z's. The details of both algorithms can be found in the supplement.
Figure 4: Left panel: three randomly selected reconstructions using electroGP compared with those using Bayesian GP-LVM; right panel: another three reconstructions from electroGP, with the first row presenting the original images, the second row the observed images, and the third row the reconstructions.
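For the sampling option, a generic random-walk Metropolis-Hastings sketch over a single latent coordinate on (0, 1) illustrates the idea. The hook `log_post`, returning the unnormalized log of p(x^z_i | z^O_i, x̂, y_1:n, Θ̂), is a hypothetical placeholder; the paper's actual sampler is described in its supplement.

```python
import numpy as np

def mh_sample_latent(log_post, n_steps=5000, step=0.2, x0=0.5, rng=None):
    """Random-walk Metropolis-Hastings for one latent coordinate on (0, 1).

    log_post(x): hypothetical hook returning the unnormalized
        log-posterior of the latent coordinate at x.
    """
    rng = np.random.default_rng(rng)
    x, lp = x0, log_post(x0)
    samples = []
    for _ in range(n_steps):
        prop = x + step * rng.normal()     # symmetric Gaussian proposal
        if 0.0 < prop < 1.0:               # reject proposals outside (0, 1)
            lp_prop = log_post(prop)
            if np.log(rng.uniform()) < lp_prop - lp:
                x, lp = prop, lp_prop      # accept
        samples.append(x)
    return np.array(samples)
```

The per-coordinate factorization above means m such univariate samplers can be run independently, one per partially observed data point.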
4 Experiments
Video inpainting. 200 consecutive frames (of size 76 × 101, RGB color) [13] were collected from a video of a teapot rotating 180°. Clearly these images roughly lie on a curve. 190 of the frames were assumed to be fully observed in the natural time order of the video, while the other 10 frames were given without any ordering information. Moreover, half of the pixels of these 10 frames were missing. The electroGP was fitted on the other 190 frames and was used to reconstruct the broken frames and impute the reconstructed frames into the whole frame series in the correct order. The reconstruction results are presented in Figure 4. As can be seen, the reconstructed images are almost indistinguishable from the original ones. Note that these 10 frames were also correctly imputed into the video with respect to their latent positions x. ElectroGP was compared with Bayesian GP-LVM [7] with the latent dimension set to 1. The reconstruction mean squared error (MSE) using electroGP is 70.62, compared to 450.75 using GP-LVM. The comparison is also presented in Figure 4. It can be seen that electroGP outperforms Bayesian GP-LVM in high-resolution precision (e.g., how well they reconstruct the handle of the teapot) since it obtains a much tighter and more precise estimate of the manifold.
Super-resolution & denoising. 100 consecutive frames (of size 100 × 100, grayscale) were collected from a video of a shrinking shockwave. Frames 51 to 55 were assumed completely missing and the other 95 frames were observed in the original time order, corrupted by strong white noise. The shockwave is homogeneous in all directions from the center; hence, the frames roughly lie on a curve. The electroGP was applied to two tasks: 1. frame denoising; 2. improving resolution by interpolating frames between the existing frames.
Note that the second task is hard since there are 5 consecutive frames missing, and they can be interpolated only if the electroGP correctly learns the underlying manifold. The denoising performance was compared with the non-local means filter (NLM) [14] and isotropic diffusion (IsD) [15]. The interpolation performance was compared with linear interpolation (LI). The comparison is presented in Figure 5. As can clearly be seen, electroGP greatly outperforms the other methods since it correctly learned this one-dimensional manifold. To be specific, the denoising MSE using electroGP is only 1.8 × 10⁻³, compared to 63.37 using NLM and 61.79 using IsD. The MSE of reconstructing the entirely missing frame 53 using electroGP is 2 × 10⁻⁵, compared to 13 using LI. An online video of the super-resolution result using electroGP can be found at this link¹. The frame rate (fps) of the video generated under electroGP was tripled compared to the original one. Though over two thirds of the frames are pure generations from electroGP, this new video flows quite smoothly. Notably, the 5 missing frames were perfectly regenerated by electroGP.
Figure 5: Row 1: from left to right, the original 95th frame, its noisy observation, and its denoised result by electroGP, NLM, and IsD; Row 2: from left to right, the original 53rd frame, its regeneration by electroGP, and the residual images (10 times the absolute error between the imputation and the original) of electroGP and LI. The blank area denotes the missing observation.
5 Discussion
Manifold learning has dramatic importance in many applications where high-dimensional data are collected with unknown low-dimensional manifold structure. While most methods focus on finding lower-dimensional summaries or characterizing the joint distribution of the data, there is (to our knowledge) no reliable method for probabilistic learning of the manifold.
This turns out to be a daunting problem due to major issues with identifiability, leading to unstable and generally poor performance for current probabilistic non-linear dimensionality reduction methods. It is not obvious how to incorporate appropriate geometric constraints to ensure identifiability of the manifold without also enforcing overly restrictive assumptions about its form. We tackled this problem in the one-dimensional manifold (curve) case and built a novel electrostatic Gaussian process model based on the general framework of GP-LVM by introducing a novel Coulomb repulsive process. Both simulations and real-world data experiments showed excellent performance of the proposed model in accurately estimating the manifold while characterizing uncertainty. Indeed, performance gains relative to competitors were dramatic. The proposed electroGP is shown to be applicable to many learning problems including video inpainting, super-resolution and video denoising. There are many interesting areas for future study, including the development of efficient algorithms for applying the model to multidimensional manifolds, while learning the dimension.
¹https://youtu.be/N1BG220J5Js This online video contains no information regarding the authors.
References
[1] J.B. Tenenbaum, V. De Silva, and J.C. Langford. A global geometric framework for nonlinear dimensionality reduction. Science, 290(5500):2319-2323, 2000.
[2] S.T. Roweis and L.K. Saul. Nonlinear dimensionality reduction by locally linear embedding. Science, 290(5500):2323-2326, 2000.
[3] M. Belkin and P. Niyogi. Laplacian eigenmaps and spectral techniques for embedding and clustering. In NIPS, volume 14, pages 585-591, 2001.
[4] M. Chen, J. Silva, J. Paisley, C. Wang, D.B. Dunson, and L. Carin. Compressive sensing on manifolds using a nonparametric mixture of factor analyzers: Algorithm and performance bounds. IEEE Transactions on Signal Processing, 58(12):6140-6155, 2010.
[5] Y. Wang, A. Canale, and D.B. Dunson.
Scalable multiscale density estimation. arXiv preprint arXiv:1410.7692, 2014.
[6] N. Lawrence. Probabilistic non-linear principal component analysis with Gaussian process latent variable models. Journal of Machine Learning Research, 6:1783-1816, 2005.
[7] M. Titsias and N. Lawrence. Bayesian Gaussian process latent variable model. Journal of Machine Learning Research, 9:844-851, 2010.
[8] N.D. Lawrence and J. Quiñonero-Candela. Local distance preservation in the GP-LVM through back constraints. In Proceedings of the 23rd International Conference on Machine Learning, pages 513-520. ACM, 2006.
[9] R. Urtasun, D.J. Fleet, A. Geiger, J. Popović, T.J. Darrell, and N.D. Lawrence. Topologically-constrained latent variable models. In Proceedings of the 25th International Conference on Machine Learning, pages 1080-1087. ACM, 2008.
[10] T. Hastie and W. Stuetzle. Principal curves. Journal of the American Statistical Association, 84(406):502-516, 1989.
[11] V. Rao, R.P. Adams, and D.B. Dunson. Bayesian inference for Matérn repulsive processes. arXiv preprint arXiv:1308.1136, 2013.
[12] J.B. Hough, M. Krishnapur, Y. Peres, et al. Zeros of Gaussian Analytic Functions and Determinantal Point Processes, volume 51. American Mathematical Society, 2009.
[13] K.Q. Weinberger and L.K. Saul. An introduction to nonlinear dimensionality reduction by maximum variance unfolding. In AAAI, volume 6, pages 1683-1686, 2006.
[14] A. Buades, B. Coll, and J.M. Morel. A non-local algorithm for image denoising. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), volume 2, pages 60-65. IEEE, 2005.
[15] P. Perona and J. Malik. Scale-space and edge detection using anisotropic diffusion. IEEE Transactions on Pattern Analysis and Machine Intelligence, 12(7):629-639, 1990.
Training Very Deep Networks
Rupesh Kumar Srivastava Klaus Greff Jürgen Schmidhuber
The Swiss AI Lab IDSIA / USI / SUPSI
{rupesh, klaus, juergen}@idsia.ch
Abstract
Theoretical and empirical evidence indicates that the depth of neural networks is crucial for their success. However, training becomes more difficult as depth increases, and training of very deep networks remains an open problem. Here we introduce a new architecture designed to overcome this. Our so-called highway networks allow unimpeded information flow across many layers on information highways. They are inspired by Long Short-Term Memory recurrent networks and use adaptive gating units to regulate the information flow. Even with hundreds of layers, highway networks can be trained directly through simple gradient descent. This enables the study of extremely deep and efficient architectures.
1 Introduction & Previous Work
Many recent empirical breakthroughs in supervised machine learning have been achieved through large and deep neural networks. Network depth (the number of successive computational layers) has played perhaps the most important role in these successes. For instance, within just a few years, the top-5 image classification accuracy on the 1000-class ImageNet dataset has increased from ∼84% [1] to ∼95% [2, 3] using deeper networks with rather small receptive fields [4, 5]. Other results on practical machine learning problems have also underscored the superiority of deeper networks [6] in terms of accuracy and/or performance. In fact, deep networks can represent certain function classes far more efficiently than shallow ones. This is perhaps most obvious for recurrent nets, the deepest of them all. For example, the n-bit parity problem can in principle be learned by a large feedforward net with n binary input units, 1 output unit, and a single but large hidden layer.
But the natural solution for arbitrary n is a recurrent net with only 3 units and 5 weights, reading the input bit string one bit at a time, making a single recurrent hidden unit flip its state whenever a new 1 is observed [7]. Related observations hold for Boolean circuits [8, 9] and modern neural networks [10, 11, 12]. To deal with the difficulties of training deep networks, some researchers have focused on developing better optimizers (e.g. [13, 14, 15]). Well-designed initialization strategies, in particular the normalized variance-preserving initialization for certain activation functions [16, 17], have been widely adopted for training moderately deep networks. Other similarly motivated strategies have shown promising results in preliminary experiments [18, 19]. Experiments showed that certain activation functions based on local competition [20, 21] may help to train deeper networks. Skip connections between layers or to output layers (where error is "injected") have long been used in neural networks, more recently with the explicit aim of improving the flow of information [22, 23, 2, 24]. A related recent technique is based on using soft targets from a shallow teacher network to aid in training deeper student networks in multiple stages [25], similar to the neural history compressor for sequences, where a slowly ticking teacher recurrent net is "distilled" into a quickly ticking student recurrent net by forcing the latter to predict the hidden units of the former [26]. Finally, deep networks can be trained layer-wise to help in credit assignment [26, 27], but this approach is less attractive compared to direct training. Very deep network training still faces problems, albeit perhaps less fundamental ones than the problem of vanishing gradients in standard recurrent networks [28]. The stacking of several non-linear transformations in conventional feed-forward network architectures typically results in poor propagation of activations and gradients.
Hence it remains hard to investigate the benefits of very deep networks for a variety of problems. To overcome this, we take inspiration from Long Short-Term Memory (LSTM) recurrent networks [29, 30]. We propose to modify the architecture of very deep feedforward networks such that information flow across layers becomes much easier. This is accomplished through an LSTM-inspired adaptive gating mechanism that allows for computation paths along which information can flow across many layers without attenuation. We call such paths information highways. They yield highway networks, as opposed to traditional 'plain' networks.¹ Our primary contribution is to show that extremely deep highway networks can be trained directly using stochastic gradient descent (SGD), in contrast to plain networks, which become hard to optimize as depth increases (Section 3.1). Deep networks with limited computational budget (for which a two-stage training procedure mentioned above was recently proposed [25]) can also be directly trained in a single stage when converted to highway networks. Their ease of training is supported by experimental results demonstrating that highway networks also generalize well to unseen data.
2 Highway Networks
Notation. We use boldface letters for vectors and matrices, and italicized capital letters to denote transformation functions. 0 and 1 denote vectors of zeros and ones respectively, and I denotes an identity matrix. The function σ(x) is defined as σ(x) = 1/(1 + e^{−x}), x ∈ R. The dot operator (·) denotes element-wise multiplication.
A plain feedforward neural network typically consists of L layers where the l-th layer (l ∈ {1, 2, ..., L}) applies a non-linear transformation H (parameterized by W_{H,l}) to its input x_l to produce its output y_l. Thus, x_1 is the input to the network and y_L is the network's output. Omitting the layer index and biases for clarity,
y = H(x, W_H).
(1)
H is usually an affine transform followed by a non-linear activation function, but in general it may take other forms, possibly convolutional or recurrent. For a highway network, we additionally define two non-linear transforms T(x, W_T) and C(x, W_C) such that
y = H(x, W_H) · T(x, W_T) + x · C(x, W_C). (2)
We refer to T as the transform gate and C as the carry gate, since they express how much of the output is produced by transforming the input and carrying it, respectively. For simplicity, in this paper we set C = 1 − T, giving
y = H(x, W_H) · T(x, W_T) + x · (1 − T(x, W_T)). (3)
The dimensionality of x, y, H(x, W_H) and T(x, W_T) must be the same for Equation 3 to be valid. Note that this layer transformation is much more flexible than Equation 1. In particular, observe that for particular values of T,
y = x if T(x, W_T) = 0, and y = H(x, W_H) if T(x, W_T) = 1. (4)
Similarly, for the Jacobian of the layer transform,
dy/dx = I if T(x, W_T) = 0, and dy/dx = H′(x, W_H) if T(x, W_T) = 1. (5)
Thus, depending on the output of the transform gates, a highway layer can smoothly vary its behavior between that of H and that of a layer which simply passes its inputs through.
¹This paper expands upon a shorter report on Highway Networks [31]. More recently, a similar LSTM-inspired model was also proposed [32].
Figure 1: Comparison of optimization of plain networks and highway networks of various depths. Left: the training curves for the best hyperparameter settings obtained for each network depth. Right: mean performance of the top 10 (out of 100) hyperparameter settings. Plain networks become much harder to optimize with increasing depth, while highway networks with up to 100 layers can still be optimized well. Best viewed on screen (a larger version is included in the Supplementary Material).
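Equation 3 is straightforward to implement. The following numpy sketch of a single fully connected highway layer uses tanh for H and a sigmoid transform gate; the weights shown are illustrative placeholders, not the paper's trained parameters.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def highway_layer(x, W_H, b_H, W_T, b_T):
    """One fully connected highway layer (Equation 3):
    y = H(x) * T(x) + x * (1 - T(x)), with C = 1 - T."""
    H = np.tanh(W_H @ x + b_H)       # block state H(x, W_H)
    T = sigmoid(W_T @ x + b_T)       # transform gate T(x, W_T)
    return H * T + x * (1.0 - T)     # carry gate implicitly 1 - T
```

With a strongly negative gate bias b_T the layer is near-identity (the carry behavior in Equation 4), while a strongly positive bias makes it behave like a plain layer computing H.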
Just as a plain layer consists of multiple computing units such that the i-th unit computes y_i = H_i(x), a highway network consists of multiple blocks such that the i-th block computes a block state H_i(x) and transform gate output T_i(x). Finally, it produces the block output y_i = H_i(x) · T_i(x) + x_i · (1 − T_i(x)), which is connected to the next layer.²
2.1 Constructing Highway Networks
As mentioned earlier, Equation 3 requires that the dimensionality of x, y, H(x, W_H) and T(x, W_T) be the same. To change the size of the intermediate representation, one can replace x with x̂ obtained by suitably sub-sampling or zero-padding x. Another alternative is to use a plain layer (without highways) to change dimensionality; this is the strategy we use in this study. Convolutional highway layers utilize weight sharing and local receptive fields for both the H and T transforms. We used receptive fields of the same size for both, and zero-padding to ensure that the block state and transform gate feature maps match the input size.
2.2 Training Deep Highway Networks
We use the transform gate defined as T(x) = σ(W_T^⊤ x + b_T), where W_T is the weight matrix and b_T the bias vector for the transform gates. This suggests a simple initialization scheme which is independent of the nature of H: b_T can be initialized with a negative value (e.g. -1, -3, etc.) such that the network is initially biased towards carry behavior. This scheme is strongly inspired by the proposal [30] to initially bias the gates in an LSTM network, to help bridge long-term temporal dependencies early in learning. Note that σ(x) ∈ (0, 1) for all x ∈ R, so the conditions in Equation 4 can never be met exactly. In our experiments, we found that a negative bias initialization for the transform gates was sufficient for training to proceed in very deep networks for various zero-mean initial distributions of W_H and different activation functions used by H. In pilot experiments, SGD did not stall for networks with more than 1000 layers.
Although the initial bias is best treated as a hyperparameter, as a general guideline we suggest values of -1, -2 and -3 for convolutional highway networks of depth approximately 10, 20 and 30, respectively.
²Our pilot experiments on training very deep networks were successful with a more complex block design closely resembling an LSTM block "unrolled in time". Here we report results only for a much simplified form.

Network                         No. of parameters   Test accuracy (in %)
Highway, 10-layer (width 16)    39 K                99.43 (99.4±0.03)
Highway, 10-layer (width 32)    151 K               99.55 (99.54±0.02)
Maxout [20]                     420 K               99.55
DSN [24]                        350 K               99.61

Table 1: Test set classification accuracy for pilot experiments on the MNIST dataset.

Network                 No. of Layers   No. of Parameters   Accuracy (in %)
Fitnet results (reported by Romero et al. [25]):
Teacher                 5               ∼9M                 90.18
Fitnet A                11              ∼250K               89.01
Fitnet B                19              ∼2.5M               91.61
Highway networks:
Highway A (Fitnet A)    11              ∼236K               89.18
Highway B (Fitnet B)    19              ∼2.3M               92.46 (92.28±0.16)
Highway C               32              ∼1.25M              91.20

Table 2: CIFAR-10 test set accuracy of convolutional highway networks. Architectures tested were based on fitnets trained by Romero et al. [25] using two-stage hint-based training. Highway networks were trained in a single stage without hints, matching or exceeding the performance of fitnets.

3 Experiments
All networks were trained using SGD with momentum. An exponentially decaying learning rate was used in Section 3.1. For the rest of the experiments, a simpler, commonly used strategy was employed where the learning rate starts at a value λ and decays according to a fixed schedule by a factor γ. λ, γ and the schedule were selected once based on validation set performance on the CIFAR-10 dataset, and kept fixed for all experiments. All convolutional highway networks utilize the rectified linear activation function [16] to compute the block state H.
To provide a better estimate of the variability of classification results due to random initialization, we report our results in the format Best (mean ± std. dev.) based on 5 runs wherever available. Experiments were conducted using the Caffe [33] and Brainstorm (https://github.com/IDSIA/brainstorm) frameworks. Source code, hyperparameter search results and related scripts are publicly available at http://people.idsia.ch/~rupesh/very_deep_learning/.
3.1 Optimization
To support the hypothesis that highway networks do not suffer from increasing depth, we conducted a series of rigorous optimization experiments, comparing them to plain networks with normalized initialization [16, 17]. We trained both plain and highway networks of varying depths on the MNIST digit classification dataset. All networks are thin: each layer has 50 blocks for highway networks and 71 units for plain networks, yielding roughly identical numbers of parameters (≈5000) per layer. In all networks, the first layer is a fully connected plain layer followed by 9, 19, 49, or 99 fully connected plain or highway layers. Finally, the network output is produced by a softmax layer. We performed a random search of 100 runs for both plain and highway networks to find good settings for the following hyperparameters: initial learning rate, momentum, learning rate exponential decay factor, and activation function (either rectified linear or tanh). For highway networks, an additional hyperparameter was the initial value for the transform gate bias (between -1 and -10). Other weights were initialized using the same normalized initialization as plain networks. The training curves for the best performing networks of each depth are shown in Figure 1. As expected, 10 and 20-layer plain networks exhibit very good performance (mean loss < 1e−4), which significantly degrades as depth increases, even though network capacity increases.
Highway networks do not suffer from an increase in depth, and 50- and 100-layer highway networks perform similarly to 10- and 20-layer networks. The 100-layer highway network performed more than 2 orders of magnitude better than a similarly-sized plain network. It was also observed that highway networks consistently converged significantly faster than plain ones.

Network              CIFAR-10 Accuracy (in %)   CIFAR-100 Accuracy (in %)
Maxout [20]          90.62                      61.42
dasNet [36]          90.78                      66.22
NiN [35]             91.19                      64.32
DSN [24]             92.03                      65.43
All-CNN [37]         92.75                      66.29
Highway Network      92.40 (92.31±0.12)         67.76 (67.61±0.15)

Table 3: Test set accuracy of convolutional highway networks on the CIFAR-10 and CIFAR-100 object recognition datasets with typical data augmentation. For comparison, we list the accuracy reported by recent studies in similar experimental settings.

3.2 Pilot Experiments on MNIST Digit Classification
As a sanity check for the generalization capability of highway networks, we trained 10-layer convolutional highway networks on MNIST, using two architectures, each with 9 convolutional layers followed by a softmax output. The number of filter maps (width) was set to 16 and 32 for all the layers. We obtained test set performance competitive with state-of-the-art methods with far fewer parameters, as shown in Table 1.
3.3 Experiments on CIFAR-10 and CIFAR-100 Object Recognition
3.3.1 Comparison to Fitnets
Fitnet training. Maxout networks can cope much better with increased depth than those with traditional activation functions [20]. However, Romero et al. [25] recently reported that training on CIFAR-10 through plain backpropagation was only possible for maxout networks with a depth up to 5 layers when the number of parameters was limited to ∼250K and the number of multiplications to ∼30M. Similar limitations were observed for higher computational budgets.
Training of deeper networks was only possible through the use of a two-stage training procedure and the addition of soft targets produced from a pre-trained shallow teacher network (hint-based training). We found that it was easy to train highway networks with numbers of parameters and operations comparable to those of fitnets in a single stage using SGD. As shown in Table 2, Highway A and Highway B, which are based on the architectures of Fitnet A and Fitnet B respectively, obtain similar or higher accuracy on the test set. We were also able to train thinner and deeper networks: for example, a 32-layer highway network consisting of alternating receptive fields of size 3×3 and 1×1 with ∼1.25M parameters performs better than the earlier teacher network [20].
3.3.2 Comparison to State-of-the-art Methods
It is possible to obtain high performance on the CIFAR-10 and CIFAR-100 datasets by utilizing very large networks and extensive data augmentation. This approach was popularized by Ciresan et al. [5] and recently extended by Graham [34]. Since our aim is only to demonstrate that deeper networks can be trained without sacrificing ease of training or generalization ability, we only performed experiments in the more common setting of global contrast normalization, small translations, and mirroring of images. Following Lin et al. [35], we replaced the fully connected layer used in the networks in the previous section with a convolutional layer with a receptive field of size one and a global average pooling layer. The hyperparameters from the last section were re-used for both CIFAR-10 and CIFAR-100; it is therefore quite possible that much better results could be obtained with better architectures/hyperparameters. The results are tabulated in Table 3.
4 Analysis
Figure 2 illustrates the inner workings of the best³ 50-hidden-layer fully-connected highway networks trained on MNIST (top row) and CIFAR-100 (bottom row).
The first three columns show the bias, the mean activity over all training samples, and the activity for a single random sample for each transform gate, respectively. Block outputs for the same single sample are displayed in the last column. The transform gate biases of the two networks were initialized to -2 and -4 respectively. It is interesting to note that, contrary to our expectations, most biases decreased further during training. For the CIFAR-100 network the biases increase with depth, forming a gradient. Curiously, this gradient is inversely correlated with the average activity of the transform gates, as seen in the second column. This indicates that the strong negative biases at low depths are not used to shut down the gates, but to make them more selective. This behavior is also suggested by the fact that the transform gate activity for a single example (column 3) is very sparse. The effect is more pronounced for the CIFAR-100 network, but can also be observed to a lesser extent in the MNIST network. The last column of Figure 2 displays the block outputs and visualizes the concept of "information highways".
³Obtained via random search over hyperparameters to minimize the best training set error achieved using each configuration.
Figure 2: Visualization of the best 50 hidden-layer highway networks trained on MNIST (top row) and CIFAR-100 (bottom row). The first hidden layer is a plain layer which changes the dimensionality of the representation to 50. Each of the 49 highway layers (y-axis) consists of 50 blocks (x-axis). The first column shows the transform gate biases, which were initialized to -2 and -4 respectively. In the second column the mean output of the transform gate over all training examples is depicted. The third and fourth columns show the output of the transform gates and the block outputs (both networks using tanh) for a single random training sample. Best viewed in color.
Most of the outputs stay constant over many layers, forming a pattern of stripes. Most of the change in outputs happens in the early layers (≈15 for MNIST and ≈40 for CIFAR-100).
4.1 Routing of Information
One possible advantage of the highway architecture over hard-wired shortcut connections is that the network can learn to dynamically adjust the routing of information based on the current input. This raises the question: does this behaviour manifest itself in trained networks, or do they just learn a static routing that applies to all inputs similarly? A partial answer can be found by looking at the mean transform gate activity (second column) and the single-example transform gate outputs (third column) in Figure 2. Especially in the CIFAR-100 case, most transform gates are active on average, while they show very selective activity for the single example. This implies that for each sample only a few blocks perform transformation, but different blocks are utilized by different samples. This data-dependent routing mechanism is further investigated in Figure 3. In each of its columns we show how the average over all samples of one specific class differs from the total average shown in the second column of Figure 2. For MNIST digits 0 and 7, substantial differences can be seen within the first 15 layers, while for CIFAR class numbers 0 and 1 the differences are sparser and spread out over all layers. In both cases it is clear that the mean activity pattern differs between classes. The gating system acts not just as a mechanism to ease training, but also as an important part of the computation in a trained network.
Figure 3: Visualization showing the extent to which the mean transform gate activity for certain classes differs from the mean activity over all training samples. Generated using the same 50-layer highway networks on MNIST and CIFAR-100 as in Figure 2. Best viewed in color.
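The quantity visualized in Figure 3 (per-class mean gate activity minus the overall mean) can be computed directly from recorded gate activations. The array shapes below are assumptions for illustration; `gate_activity` stands in for activations recorded from a trained network.

```python
import numpy as np

def class_gate_deviation(gate_activity, labels):
    """Per-class deviation of mean transform gate activity.

    gate_activity: assumed shape (n_samples, n_layers, n_blocks),
        the recorded transform gate outputs for each training sample.
    labels: shape (n_samples,) class labels.
    Returns a dict mapping class -> (n_layers, n_blocks) deviation from
    the mean activity over all training samples.
    """
    overall = gate_activity.mean(axis=0)   # mean over all samples
    return {c: gate_activity[labels == c].mean(axis=0) - overall
            for c in np.unique(labels)}
```

For balanced classes the deviations average to zero across classes, so each map shows how one class's routing departs from the network's typical routing.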
4.2 Layer Importance

Since we bias all the transform gates towards being closed, in the beginning every layer mostly copies the activations of the previous layer. Does training indeed change this behaviour, or is the final network still essentially equivalent to a network with far fewer layers? To shed light on this issue, we investigated the extent to which lesioning a single layer affects the total performance of trained networks from Section 3.1. By lesioning, we mean manually setting all the transform gates of a layer to 0, forcing it to simply copy its inputs. For each layer, we evaluated the network on the full training set with the gates of that layer closed. The resulting performance as a function of the lesioned layer is shown in Figure 4. For MNIST (left) it can be seen that the error rises significantly if any one of the early layers is removed, but layers 15–45 seem to have close to no effect on the final performance. About 60% of the layers don't learn to contribute to the final result, likely because MNIST is a simple dataset that doesn't require much depth. We see a different picture for the CIFAR-100 dataset (right), with performance degrading noticeably when removing any of the first ≈40 layers. This suggests that for complex problems a highway network can learn to utilize all of its layers, while for simpler problems like MNIST it will keep many of the unneeded layers idle. Such behavior is desirable for deep networks in general, but appears difficult to obtain using plain networks.

5 Discussion

Alternative approaches to countering the difficulties posed by depth mentioned in Section 1 often have several limitations. Learning to route information through neural networks with the help of competitive interactions has helped to scale up their application to challenging problems by improving credit assignment [38], but such networks still suffer when depth increases beyond ≈20, even with careful initialization [17].
Effective initialization methods can be difficult to derive for a variety of activation functions. Deep supervision [24] has been shown to hurt the performance of thin deep networks [25]. Very deep highway networks, on the other hand, can be trained directly with simple gradient descent methods due to their specific architecture. This property does not rely on specific non-linear transformations, which may be complex convolutional or recurrent transforms, and derivation of a suitable initialization scheme is not essential. The additional parameters required by the gating mechanism help in routing information through the use of multiplicative connections, responding differently to different inputs, unlike fixed "skip" connections.

Figure 4: Lesioned training set performance (y-axis) of the best 50-layer highway networks on MNIST (left) and CIFAR-100 (right), as a function of the lesioned layer (x-axis). Evaluated on the full training set while forcefully closing all the transform gates of a single layer at a time. The non-lesioned performance is indicated as a dashed line at the bottom.

A possible objection is that many layers might remain unused if the transform gates stay closed. Our experiments show that this possibility does not affect networks adversely: deep and narrow highway networks can match or exceed the accuracy of wide and shallow maxout networks, which would not be possible if layers did not perform useful computations. Additionally, we can exploit the structure of highways to directly evaluate the contribution of each layer, as shown in Figure 4. For the first time, highway networks allow us to examine how much computation depth is needed for a given problem, which cannot be easily done with plain networks.

Acknowledgments

We thank NVIDIA Corporation for their donation of GPUs and acknowledge funding from the EU project NASCENCE (FP7-ICT-317662).
We are grateful to Sepp Hochreiter and Thomas Unterthiner for helpful comments and Jan Koutník for help in conducting experiments.

References

[1] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, 2012.
[2] Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich. Going deeper with convolutions. arXiv:1409.4842 [cs], September 2014.
[3] Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv:1409.1556 [cs], September 2014.
[4] D. C. Ciresan, Ueli Meier, Jonathan Masci, Luca M. Gambardella, and Jürgen Schmidhuber. Flexible, high performance convolutional neural networks for image classification. In IJCAI, 2011.
[5] Dan Ciresan, Ueli Meier, and Jürgen Schmidhuber. Multi-column deep neural networks for image classification. In IEEE Conference on Computer Vision and Pattern Recognition, 2012.
[6] Dong Yu, Michael L. Seltzer, Jinyu Li, Jui-Ting Huang, and Frank Seide. Feature learning in deep neural networks: studies on speech recognition tasks. arXiv preprint arXiv:1301.3605, 2013.
[7] Sepp Hochreiter and Jürgen Schmidhuber. Bridging long time lags by weight guessing and "long short-term memory". Spatiotemporal Models in Biological and Artificial Systems, 37:65–72, 1996.
[8] Johan Håstad. Computational Limitations of Small-Depth Circuits. MIT Press, 1987.
[9] Johan Håstad and Mikael Goldmann. On the power of small-depth threshold circuits. Computational Complexity, 1(2):113–129, 1991.
[10] Monica Bianchini and Franco Scarselli. On the complexity of neural network classifiers: A comparison between shallow and deep architectures. IEEE Transactions on Neural Networks, 2014.
[11] Guido F. Montufar, Razvan Pascanu, Kyunghyun Cho, and Yoshua Bengio. On the number of linear regions of deep neural networks.
In Advances in Neural Information Processing Systems, 2014.
[12] James Martens and Venkatesh Medabalimi. On the expressive efficiency of sum product networks. arXiv:1411.7717 [cs, stat], November 2014.
[13] James Martens and Ilya Sutskever. Training deep and recurrent networks with Hessian-free optimization. Neural Networks: Tricks of the Trade, pages 1–58, 2012.
[14] Ilya Sutskever, James Martens, George Dahl, and Geoffrey Hinton. On the importance of initialization and momentum in deep learning. pages 1139–1147, 2013.
[15] Yann N. Dauphin, Razvan Pascanu, Caglar Gulcehre, Kyunghyun Cho, Surya Ganguli, and Yoshua Bengio. Identifying and attacking the saddle point problem in high-dimensional non-convex optimization. In Advances in Neural Information Processing Systems 27, pages 2933–2941, 2014.
[16] Xavier Glorot and Yoshua Bengio. Understanding the difficulty of training deep feedforward neural networks. In International Conference on Artificial Intelligence and Statistics, pages 249–256, 2010.
[17] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification. arXiv:1502.01852 [cs], February 2015.
[18] David Sussillo and L. F. Abbott. Random walk initialization for training very deep feedforward networks. arXiv:1412.6558 [cs, stat], December 2014.
[19] Andrew M. Saxe, James L. McClelland, and Surya Ganguli. Exact solutions to the nonlinear dynamics of learning in deep linear neural networks. arXiv:1312.6120 [cond-mat, q-bio, stat], December 2013.
[20] Ian J. Goodfellow, David Warde-Farley, Mehdi Mirza, Aaron Courville, and Yoshua Bengio. Maxout networks. arXiv:1302.4389 [cs, stat], February 2013.
[21] Rupesh K. Srivastava, Jonathan Masci, Sohrob Kazerounian, Faustino Gomez, and Jürgen Schmidhuber. Compete to compute. In Advances in Neural Information Processing Systems, pages 2310–2318, 2013.
[22] Tapani Raiko, Harri Valpola, and Yann LeCun.
Deep learning made easier by linear transformations in perceptrons. In International Conference on Artificial Intelligence and Statistics, pages 924–932, 2012.
[23] Alex Graves. Generating sequences with recurrent neural networks. arXiv:1308.0850, 2013.
[24] Chen-Yu Lee, Saining Xie, Patrick Gallagher, Zhengyou Zhang, and Zhuowen Tu. Deeply-supervised nets. pages 562–570, 2015.
[25] Adriana Romero, Nicolas Ballas, Samira Ebrahimi Kahou, Antoine Chassang, Carlo Gatta, and Yoshua Bengio. FitNets: Hints for thin deep nets. arXiv:1412.6550 [cs], December 2014.
[26] Jürgen Schmidhuber. Learning complex, extended sequences using the principle of history compression. Neural Computation, 4(2):234–242, March 1992.
[27] Geoffrey E. Hinton, Simon Osindero, and Yee-Whye Teh. A fast learning algorithm for deep belief nets. Neural Computation, 18(7):1527–1554, 2006.
[28] Sepp Hochreiter. Untersuchungen zu dynamischen neuronalen Netzen. Master's thesis, Technische Universität München, München, 1991.
[29] Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735–1780, November 1997.
[30] Felix A. Gers, Jürgen Schmidhuber, and Fred Cummins. Learning to forget: Continual prediction with LSTM. In ICANN, volume 2, pages 850–855, 1999.
[31] Rupesh Kumar Srivastava, Klaus Greff, and Jürgen Schmidhuber. Highway networks. arXiv:1505.00387 [cs], May 2015.
[32] Nal Kalchbrenner, Ivo Danihelka, and Alex Graves. Grid long short-term memory. arXiv:1507.01526 [cs], July 2015.
[33] Yangqing Jia, Evan Shelhamer, Jeff Donahue, Sergey Karayev, Jonathan Long, Ross Girshick, Sergio Guadarrama, and Trevor Darrell. Caffe: Convolutional architecture for fast feature embedding. arXiv:1408.5093 [cs], 2014.
[34] Benjamin Graham. Spatially-sparse convolutional neural networks. arXiv:1409.6070, September 2014.
[35] Min Lin, Qiang Chen, and Shuicheng Yan. Network in network. arXiv:1312.4400, 2014.
[36] Marijn F. Stollenga, Jonathan Masci, Faustino Gomez, and Jürgen Schmidhuber. Deep networks with internal selective attention through feedback connections. In NIPS, 2014.
[37] Jost Tobias Springenberg, Alexey Dosovitskiy, Thomas Brox, and Martin Riedmiller. Striving for simplicity: The all convolutional net. arXiv:1412.6806 [cs], December 2014.
[38] Rupesh Kumar Srivastava, Jonathan Masci, Faustino Gomez, and Jürgen Schmidhuber. Understanding locally competitive networks. In International Conference on Learning Representations, 2015.
Fast and Memory Optimal Low-Rank Matrix Approximation

Se-Young Yun, MSR, Cambridge (seyoung.yun@inria.fr); Marc Lelarge, Inria & ENS (marc.lelarge@ens.fr); Alexandre Proutiere, KTH, EE School / ACL (alepro@kth.se)

Abstract

In this paper, we revisit the problem of constructing a near-optimal rank-k approximation of a matrix M ∈ [0,1]^{m×n} under the streaming data model, where the columns of M are revealed sequentially. We present SLA (Streaming Low-rank Approximation), an algorithm that is asymptotically accurate when k·s_{k+1}(M) = o(√(mn)), where s_{k+1}(M) is the (k+1)-th largest singular value of M. This means that its average mean-square error converges to 0 as m and n grow large (i.e., ‖M̂^{(k)} − M^{(k)}‖_F² = o(mn) with high probability, where M̂^{(k)} and M^{(k)} denote the output of SLA and the optimal rank-k approximation of M, respectively). Our algorithm makes one pass on the data if the columns of M are revealed in a random order, and two passes if the columns of M arrive in an arbitrary order. To reduce its memory footprint and complexity, SLA uses random sparsification, and samples each entry of M with a small probability δ. In turn, SLA is memory optimal, as its required memory space scales as k(m+n), the dimension of its output. Furthermore, SLA is computationally efficient, as it runs in O(δkmn) time (a constant number of operations is made for each observed entry of M), which can be as small as O(k log⁴(m) n) for an appropriate choice of δ and if n ≥ m.

1 Introduction

We investigate the problem of constructing, in a memory- and computationally efficient manner, an accurate estimate of the optimal rank-k approximation M^{(k)} of a large (m × n) matrix M ∈ [0,1]^{m×n}. This problem is fundamental in machine learning, and has naturally found numerous applications in computer science.
The optimal rank-k approximation M^{(k)} minimizes, over all rank-k matrices Z, the Frobenius norm ‖M − Z‖_F (and any norm that is invariant under rotation), and can be computed by Singular Value Decomposition (SVD) of M in O(nm²) time (if we assume that m ≤ n). For massive matrices M (i.e., when m and n are very large), this becomes unacceptably slow. In addition, storing and manipulating M in memory may become difficult. In this paper, we design a memory- and computationally efficient algorithm, referred to as Streaming Low-rank Approximation (SLA), that computes a near-optimal rank-k approximation M̂^{(k)}. Under mild assumptions on M, the SLA algorithm is asymptotically accurate in the sense that as m and n grow large, its average mean-square error converges to 0, i.e., ‖M̂^{(k)} − M^{(k)}‖_F² = o(mn) with high probability (we interpret M^{(k)} as the signal that we aim to recover from a noisy observation M). To reduce its memory footprint and running time, the proposed algorithm combines random sparsification and the streaming data model. More precisely, each entry of M is revealed to the algorithm with probability δ, called the sampling rate. Moreover, SLA observes and treats the columns of M one after the other in a sequential manner. The sequence of observed columns may be chosen uniformly at random, in which case the algorithm requires only one pass on M, or can be arbitrary, in which case the algorithm needs two passes. SLA first stores ℓ = 1/(δ log(m)) randomly selected columns, and extracts via spectral decomposition an estimator of parts of the top k right singular vectors of M.
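The optimal rank-k approximation referred to above is the truncated SVD (by the Eckart–Young theorem). A minimal sketch of the target quantity ‖M − M^{(k)}‖_F²/(mn), with toy sizes of our own choosing:

```python
import numpy as np

def rank_k_approx(M, k):
    """Optimal rank-k approximation M^(k): truncate the SVD (Eckart-Young)."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U[:, :k] * s[:k]) @ Vt[:k, :]

rng = np.random.default_rng(0)
m, n, k = 40, 60, 3
# Toy instance: a rank-k "signal" plus small noise, entries clipped to [0, 1].
signal = rng.random((m, k)) @ rng.random((k, n)) / k
M = np.clip(signal + 0.01 * rng.standard_normal((m, n)), 0.0, 1.0)
Mk = rank_k_approx(M, k)
# The truncated SVD minimizes the Frobenius error over all rank-k matrices,
# so the normalized squared error stays at the (small) noise level.
err = np.linalg.norm(M - Mk, "fro") ** 2 / (m * n)
assert err < 1e-3
```

For an m × n matrix this exact computation costs O(nm²), which is precisely what SLA avoids by sampling and streaming.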
It then completes the estimator of these vectors by receiving and treating the remaining columns sequentially. SLA finally builds, from the estimated top-k right singular vectors, the linear projection onto the subspace generated by these vectors, and deduces an estimator of M^{(k)}. The analysis of the performance of SLA is presented in Theorems 7 and 8. In summary: when m ≤ n and log⁴(m)/m ≤ δ ≤ m^{-8/9}, with probability 1 − kδ, the output M̂^{(k)} of SLA satisfies:

‖M^{(k)} − M̂^{(k)}‖_F² / (mn) = O( k² ( s_{k+1}²(M)/(mn) + log(m)/√(δm) ) ),   (1)

where s_{k+1}(M) is the (k+1)-th singular value of M. SLA requires O(kn) memory space, and if δ ≥ log⁴(m)/m and k ≤ log⁶(m), its running time is O(δkmn). To ensure the asymptotic accuracy of SLA, the upper bound in (1) needs to converge to 0, which is true as soon as k·s_{k+1}(M) = o(√(mn)). In the case where M is seen as a noisy version of M^{(k)}, this condition quantifies the maximum amount of noise allowed for our algorithm to be asymptotically accurate. SLA is memory optimal, since any rank-k approximation algorithm needs at least to store its output, i.e., k right and left singular vectors, and hence needs at least O(kn) memory space. Further observe that, among the class of algorithms sampling each entry of M at a given rate δ, SLA is computationally optimal, since it runs in O(δkmn) time (it performs a constant number of operations per observed entry if k = O(1)). In turn, to the best of our knowledge, SLA is both faster and more memory efficient than existing algorithms. SLA is the first memory optimal and asymptotically accurate low-rank approximation algorithm. The approach used to design SLA can be readily extended to devise memory- and computationally efficient matrix completion algorithms. We present this extension in the supplementary material.

Notations. Throughout the paper, we use the following notations. For any m × n matrix A, we denote by A⊤ its transpose, and by A⁻¹ its pseudo-inverse. We denote by s₁(A) ≥ ··· ≥ s_{n∧m}(A) ≥ 0 the singular values of A.
When matrices A and B have the same number of rows, we write [A, B] to denote the matrix whose first columns are those of A followed by those of B. A⊥ denotes an orthonormal basis of the subspace perpendicular to the linear span of the columns of A. A_j, Aⁱ, and A_{ij} denote the j-th column of A, the i-th row of A, and the entry of A in the i-th row and j-th column, respectively. For h ≤ l, A_{h:l} (resp. A^{h:l}) is the matrix obtained by extracting the columns (resp. rows) h, ..., l of A. For any ordered set B = {b₁, ..., b_p} ⊂ {1, ..., n}, A_{(B)} refers to the matrix composed of the ordered set B of columns of A. A^{(B)} is defined similarly (but for rows). For real numbers a ≤ b, we define |A|ᵇₐ as the matrix with (i, j) entry equal to (|A|ᵇₐ)_{ij} = min(b, max(a, A_{ij})). Finally, for any vector v, ‖v‖ denotes its Euclidean norm, whereas for any matrix A, ‖A‖_F denotes its Frobenius norm, ‖A‖₂ its operator norm, and ‖A‖_∞ its ℓ∞-norm, i.e., ‖A‖_∞ = max_{i,j} |A_{ij}|.

2 Related Work

Low-rank approximation algorithms have received a lot of attention over the last decade. There are two types of error estimates for these algorithms: the error is either additive or relative. Translating our bound (1) into an additive error is easy:

‖M − M̂^{(k)}‖_F ≤ ‖M − M^{(k)}‖_F + O( k ( s_{k+1}(M)/√(mn) + log^{1/2}(m)/(δm)^{1/4} ) √(mn) ).   (2)

Sparsifying M to speed up the computation of a low-rank approximation has been proposed in the literature, and the best additive error bounds have been obtained in [AM07]. When the sampling rate δ satisfies δ ≥ log⁴(m)/m, the authors show that with probability 1 − exp(−log⁴ m),

‖M − M̃^{(k)}‖_F ≤ ‖M − M^{(k)}‖_F + O( k^{1/2} n^{1/2}/δ^{1/2} + (k^{1/4} n^{1/4}/δ^{1/4}) ‖M^{(k)}‖_F^{1/2} ).   (3)

This performance guarantee is derived from Lemma 1.1 and Theorem 1.4 in [AM07]. To compare (2) and (3), note that our assumption of bounded entries of M ensures that s²_{k+1}(M)/(mn) ≤ 1/k and ‖M^{(k)}‖_F ≤ ‖M‖_F ≤ √(mn).
In particular, we see that the worst-case bound for (3) is (k^{1/2}/√(δm) + k^{1/4}/(δm)^{1/4})√(nm), which is always lower than the worst-case bound for (2): k(1/k + log(m)/√(δm))^{1/2} √(nm). When k = O(1), our bound is only larger by a logarithmic term in m compared to [AM07]. However, the algorithm proposed in [AM07] requires storing O(δmn) entries of M, whereas SLA needs O(n) memory space. Recall that log⁴ m ≤ δm ≤ m^{1/9}, so that our algorithm makes a significant improvement on the memory requirement at a low price in the error guarantee bounds. Although biased sampling algorithms can reduce the error, such algorithms have to compute leverage scores with multiple passes over the data [BJS15]. In a recent work, [CW13] proposes a time-efficient algorithm to compute a low-rank approximation of a sparse matrix. Combined with [AM07], this yields an algorithm running in O(δmn) + O(nk² + k³) time, but with an increased additive error term. We can also compare our result to papers providing an estimate M̃^{(k)} of the optimal low-rank approximation of M with a relative error ε, i.e., such that ‖M − M̃^{(k)}‖_F ≤ (1 + ε)‖M − M^{(k)}‖_F. To the best of our knowledge, [CW09] provides the best result in this setting. Theorem 4.4 in [CW09] shows that, provided the rank of M is at least 2(k+1), their algorithm outputs with probability 1 − η a rank-k matrix M̃^{(k)} with relative error ε, using memory space O((k/ε) log(1/η)(n + m)) (note that in [CW09], the authors use a bit as the unit of memory, whereas we use an entry of the matrix, so we removed a log(mn) factor in their expression to make fair comparisons). To compare with our result, we can translate our bound (1) into a relative error, and we need to take:

ε = O( k ( s_{k+1}(M) + (log^{1/2}(m)/(δm)^{1/4}) √(mn) ) / ‖M − M^{(k)}‖_F ).

First note that since M is assumed to be of rank at least 2(k+1), we have ‖M − M^{(k)}‖_F ≥ s_{k+1}(M) > 0, so ε is well-defined. Clearly, for our ε to tend to zero, we need ‖M − M^{(k)}‖_F to be not too small.
For the scenario we have in mind, M is a noisy version of the signal M^{(k)}, so that M − M^{(k)} is the noise matrix. When every entry of M − M^{(k)} is generated independently at random with a constant variance, ‖M − M^{(k)}‖_F = Θ(√(mn)), while s_{k+1}(M) = Θ(√n). In such a case, we have ε = o(1), and we improve the memory requirement of [CW09] by a factor ε⁻¹ log(kδ)⁻¹. [CW09] also considers a model where the full columns of M are revealed one after the other in an arbitrary order, and proposes a one-pass algorithm to derive the rank-k approximation of M with the same memory requirement. In this general setting, our algorithm is required to make two passes on the data (and only one pass if the order of arrival of the columns is random instead of arbitrary). The running time of their algorithm scales as O(kmn ε⁻¹ log(kδ)⁻¹), needed to project M onto a (k ε⁻¹ log(kδ)⁻¹)-dimensional random space. Thus, SLA improves the time again by a factor of ε⁻¹ log(kδ)⁻¹. We could also think of using sketching and streaming PCA algorithms to estimate M^{(k)}. When the columns arrive sequentially, these algorithms identify the left singular vectors using one pass on the matrix, and then need a second pass on the data to estimate the right singular vectors. For example, [Lib13] proposes a sketching algorithm that updates the p most frequent directions as columns are observed. [GP14] shows that with O(km/ε) memory space (for p = k/ε), this sketching algorithm finds an m × k matrix Û such that ‖M − P_Û M‖_F ≤ (1 + ε)‖M − M^{(k)}‖_F, where P_Û denotes the projection matrix onto the linear span of the columns of Û. The running time of the algorithm is roughly O(kmn ε⁻¹), which is much greater than that of SLA. Note also that to identify such a matrix Û in one pass on M, it is shown in [Woo14] that one has to use Ω(km/ε) memory space. This result does not contradict the performance analysis of SLA, since the latter needs two passes on M if the columns of M are observed in an arbitrary order.
Finally, note that the streaming PCA algorithm proposed in [MCJ13] does not apply to our problem, as that paper investigates a very specific setting: the spiked covariance model, where each column is randomly generated in an i.i.d. manner.

3 Streaming Low-rank Approximation Algorithm

Algorithm 1: Streaming Low-rank Approximation (SLA)
Input: M, k, δ, and ℓ = 1/(δ log(m))
1. A_{(B1)}, A_{(B2)} ← independently sample entries of [M₁, ..., M_ℓ] at rate δ
2. PCA for the first ℓ columns: Q ← SPCA(A_{(B1)}, k)
3. Trimming the rows and columns of A_{(B2)}:
   A_{(B2)} ← set the entries of the rows of A_{(B2)} having more than two non-zero entries to 0
   A_{(B2)} ← set the entries of the columns of A_{(B2)} having more than 10mδ non-zero entries to 0
4. W ← A_{(B2)} Q
5. V̂^{(B)} ← (A_{(B1)})⊤ W
6. Î ← A_{(B1)} V̂^{(B)}
   Remove A_{(B1)}, A_{(B2)}, and Q from the memory space
for t = ℓ+1 to n do
   7. A_t ← sample entries of M_t at rate δ
   8. V̂^t ← (A_t)⊤ W
   9. Î ← Î + A_t V̂^t
   Remove A_t from the memory space
end for
10. R̂ ← find R̂ using the Gram–Schmidt process such that V̂ R̂ is an orthonormal matrix
11. Û ← (1/δ) Î R̂ R̂⊤
Output: M̂^{(k)} = |Û V̂⊤|¹₀

Algorithm 2: Spectral PCA (SPCA)
Input: C ∈ [0,1]^{m×ℓ}, k
Ω ← ℓ × k Gaussian random matrix
Trimming: C̄ ← set the entries of the rows of C with more than 10 non-zero entries to 0
Φ ← C̄⊤C̄ − diag(C̄⊤C̄)
Power iteration: QR ← QR decomposition of Φ^{⌈5 log(ℓ)⌉} Ω
Output: Q

In this section, we present the Streaming Low-rank Approximation (SLA) algorithm and analyze its performance. SLA makes one pass on the matrix M, and is provided with the columns of M one after the other in a streaming manner. The SVD of M is M = UΣV⊤, where U and V are (m×m) and (n×n) unitary matrices and Σ is the (m×n) matrix diag(s₁(M), ..., s_{n∧m}(M)). We assume (or impose by design of SLA) that the first ℓ (specified below) observed columns of M are chosen uniformly at random among all columns.
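As a minimal illustration of the sampling primitive used in Lines 1 and 7 of Algorithm 1 (the helper name and toy sizes below are ours), each entry is kept independently with probability δ, and dividing the result by δ gives an unbiased, though sparse and noisy, estimate of M:

```python
import numpy as np

def sample_entries(M, delta, rng):
    """Bernoulli sparsification: keep each entry independently with prob. delta.

    E[A] = delta * M, so A / delta is an unbiased (though sparse and noisy)
    estimate of M; SLA works directly on such sampled matrices.
    """
    mask = rng.random(M.shape) < delta
    return np.where(mask, M, 0.0)

rng = np.random.default_rng(0)
M = rng.random((200, 300))
delta = 0.1
A = sample_entries(M, delta, rng)
# Roughly a delta fraction of the entries survives ...
kept = np.count_nonzero(A) / M.size
assert abs(kept - delta) < 0.02
# ... and rescaling by 1/delta is unbiased (entrywise means agree closely).
assert abs((A / delta).mean() - M.mean()) < 0.05
```

Storing only the O(δmn) surviving entries, column by column, is what makes the O(δkmn) running time and the small memory footprint possible.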
An extension of SLA to scenarios where columns are observed in an arbitrary order is presented in §3.5, but this extension requires two passes on M. To be memory efficient, SLA uses sampling: each observed entry of M is erased (i.e., set equal to 0) with probability 1 − δ, where δ > 0 is referred to as the sampling rate. The algorithm, whose pseudo-code is presented in Algorithm 1, proceeds in three steps.

1. In the first step, we observe ℓ = 1/(δ log(m)) columns of M chosen uniformly at random. These columns form the matrix M_{(B)} = UΣ(V^{(B)})⊤, where B denotes the ordered set of the indexes of the first ℓ observed columns. M_{(B)} is sampled at rate δ. More precisely, we apply two independent sampling procedures, where in each of them, every entry of M_{(B)} is sampled at rate δ. The two resulting independent random matrices, A_{(B1)} and A_{(B2)}, are stored in memory. A_{(B1)}, referred to as A_{(B)} to simplify the notations, is used in this first step, whereas A_{(B2)} will be used in subsequent steps. Next, through a spectral decomposition of A_{(B)}, we derive an (ℓ × k) orthonormal matrix Q such that the span of its column vectors approximates that of the column vectors of V^{(B)}_{1:k}. The first step corresponds to Lines 1 and 2 in the pseudo-code of SLA.

2. In the second step, we complete the construction of our estimator of the top k right singular vectors V_{1:k} of M. Denote by V̂ the (n × k) matrix formed by these estimated vectors. We first compute the components of these vectors corresponding to the set of indexes B as V̂^{(B)} = A⊤_{(B1)} W, with W = A_{(B2)} Q. Then for t = ℓ+1, ..., n, after receiving the t-th column M_t of M, we set V̂^t = A⊤_t W, where A_t is obtained by sampling the entries of M_t at rate δ. Hence after one pass on M, we get V̂ = Ã⊤W, where Ã = [A_{(B1)}, A_{ℓ+1}, ..., A_n]. As it turns out, multiplying W by Ã⊤ amplifies the useful signal contained in W, and yields an accurate approximation of the span of the top k right singular vectors V_{1:k} of M.
The second step corresponds to Lines 3, 4, 5, 7 and 8 in the SLA pseudo-code.

3. In the last step, we deduce from V̂ a set of column vectors, gathered in a matrix Û, such that Û V̂⊤ provides an accurate approximation of M^{(k)}. First, using the Gram–Schmidt process, we find R̂ such that V̂ R̂ is an orthonormal matrix, and compute Û = (1/δ) Ã V̂ R̂ R̂⊤ in a streaming manner as in Step 2. Then, Û V̂⊤ = (1/δ) Ã V̂ R̂ (V̂ R̂)⊤, where V̂ R̂ (V̂ R̂)⊤ approximates the projection matrix onto the linear span of the top k right singular vectors of M. Thus, Û V̂⊤ is close to M^{(k)}. This last step is described in Lines 6, 9, 10 and 11 in the SLA pseudo-code.

In the next subsections, we present in more detail the rationale behind the three steps of SLA, and provide a performance analysis of the algorithm.

3.1 Step 1: Estimating right singular vectors of the first batch of columns

The objective of the first step is to estimate V^{(B)}_{1:k}, i.e., those components of the top k right singular vectors of M whose indexes are in the set B (remember that B is the set of indexes of the first ℓ observed columns). This estimator, denoted by Q, is obtained by applying the power method to extract the top k right singular vectors of M_{(B)}, as described in Algorithm 2. In the design of this algorithm and its performance analysis, we face two challenges: (i) we only have access to a sampled version A_{(B)} of M_{(B)}; and (ii) UΣ(V^{(B)})⊤ is not the SVD of M_{(B)}, since the column vectors of V^{(B)}_{1:k} are not orthonormal in general (we keep only the components of these vectors corresponding to the set of indexes B). Hence, the top k right singular vectors of M_{(B)} that we extract in Algorithm 2 do not necessarily correspond to V^{(B)}_{1:k}. To address (i), in Algorithm 2 we do not directly extract the top k right singular vectors of A_{(B)}. We first remove the rows of A_{(B)} with too many non-zero entries (i.e., too many observed entries from M_{(B)}), since these rows would perturb the SVD of A_{(B)}.
Let us denote by Ā the resulting trimmed matrix. We then form the covariance matrix Ā⊤Ā, and remove its diagonal entries to obtain the matrix Φ = Ā⊤Ā − diag(Ā⊤Ā). Removing the diagonal entries is needed because of the sampling procedure. Indeed, the diagonal entries of Ā⊤Ā scale as δ, whereas its off-diagonal entries scale as δ². Hence, when δ is small, the diagonal entries would clearly become dominant in the spectral decomposition. We finally apply the power method to Φ to obtain Q. In the analysis of the performance of Algorithm 2, the following lemma will be instrumental; it provides an upper bound on the gap between Φ and δ²(M_{(B)})⊤M_{(B)}, using the matrix Bernstein inequality (Theorem 6.1 of [Tro12]). All proofs are detailed in the Appendix.

Lemma 1. If δ ≤ m^{-8/9}, with probability 1 − 1/ℓ², ‖Φ − δ²(M_{(B)})⊤M_{(B)}‖₂ ≤ c₁ δ √(mℓ log(ℓ)), for some constant c₁ > 1.

To address (ii), we first establish in Lemma 2 that, for an appropriate choice of ℓ, the column vectors of V^{(B)}_{1:k} are approximately orthonormal. This lemma is of independent interest, and relates the SVD of a truncated matrix, here M_{(B)}, to that of the initial matrix M. More precisely:

Lemma 2. If δ ≤ m^{-8/9}, there exists an ℓ×k matrix V̄^{(B)} whose column vectors are orthonormal such that, with probability 1 − exp(−m^{1/7}), for all i ≤ k satisfying s_i²(M) ≥ (n/(δℓ))√(mℓ log(ℓ)),
‖√(n/ℓ) V^{(B)}_{1:i} − V̄^{(B)}_{1:i}‖₂ ≤ m^{-1/3}.

Note that, as suggested by the above lemma, it might be impossible to recover V^{(B)}_i when the corresponding singular value s_i(M) is small (more precisely, when s_i²(M) ≤ (n/(δℓ))√(mℓ log(ℓ))). However, the singular vectors corresponding to such small singular values generate very little error for low-rank approximation. Thus, we are only interested in singular vectors whose singular values are above the threshold ((n/(δℓ))√(mℓ log(ℓ)))^{1/2}. Let k′ = max{i : s_i²(M) ≥ (n/(δℓ))√(mℓ log(ℓ)), i ≤ k}.
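The ingredients of Algorithm 2 (row trimming, removal of the diagonal of the covariance, and power iteration with a Gaussian start) can be sketched as follows. This is a simplified rendering on a noiseless rank-1 toy batch with trimming effectively disabled, not the paper's exact implementation:

```python
import numpy as np

def spca(C, k, trim=10, iters=None, rng=None):
    """Sketch of Algorithm 2 (SPCA): estimate the span of the top-k right
    singular directions of a (possibly sparsified) batch C.

    Rows with more than `trim` non-zeros are zeroed (trimming), the diagonal
    of the covariance is removed (it scales as delta, off-diagonals as
    delta^2), and subspace/power iteration with QR re-orthonormalization and
    a Gaussian start recovers the span.
    """
    rng = rng if rng is not None else np.random.default_rng(0)
    m, ell = C.shape
    iters = iters if iters is not None else int(np.ceil(5 * np.log(ell)))
    Cbar = C.copy()
    Cbar[(C != 0).sum(axis=1) > trim] = 0.0   # trim rows with too many entries
    Phi = Cbar.T @ Cbar
    np.fill_diagonal(Phi, 0.0)                # remove the dominant diagonal
    Q = rng.standard_normal((ell, k))         # Gaussian random start
    for _ in range(iters):                    # power iteration ...
        Q, _ = np.linalg.qr(Phi @ Q)          # ... with re-orthonormalization
    return Q

# Toy check: on a rank-1 batch (no sampling noise, trimming disabled), the
# span of Q should align with the batch's top right singular direction.
rng = np.random.default_rng(1)
u, v = rng.random(100), rng.random(50)
C = np.outer(u, v) * 0.01
Q = spca(C, k=1, trim=10_000, rng=rng)
cos = abs(Q[:, 0] @ v) / np.linalg.norm(v)
assert cos > 0.95
```

On real sampled data the diagonal of Ā⊤Ā is not merely redundant but actively misleading at small δ, which is why `fill_diagonal(..., 0.0)` mirrors the Φ = Ā⊤Ā − diag(Ā⊤Ā) step.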
Now, to analyze the performance of Algorithm 2 when applied to A_{(B)}, we decompose Φ as Φ = (δ²ℓ/n) V̄^{(B)}_{1:k′} (Σ^{1:k′}_{1:k′})² (V̄^{(B)}_{1:k′})⊤ + Y, where Y = Φ − (δ²ℓ/n) V̄^{(B)}_{1:k′} (Σ^{1:k′}_{1:k′})² (V̄^{(B)}_{1:k′})⊤ is a noise matrix. The following lemma quantifies how noise may affect the performance of the power method, i.e., it provides an upper bound on the gap between Q and V̄^{(B)}_{1:k′} as a function of the operator norm of the noise matrix Y:

Lemma 3. With probability 1 − 1/ℓ², the output Q of SPCA when applied to A_{(B)} satisfies, for all i ≤ k′:
‖(V̄^{(B)}_{1:i})⊤ · Q⊥‖₂ ≤ 3‖Y‖₂ / ( δ² (ℓ/n) s_i(M)² ).

In the proof, we analyze the power iteration algorithm building on results from [HMT11]. To complete the performance analysis of Algorithm 2, it remains to upper bound ‖Y‖₂. To this aim, we decompose Y into three terms:
Y = (Φ − δ²(M_{(B)})⊤M_{(B)}) + δ²(M_{(B)})⊤(I − U_{1:k′}U⊤_{1:k′})M_{(B)} + δ²( (M_{(B)})⊤U_{1:k′}U⊤_{1:k′}M_{(B)} − (ℓ/n) V̄^{(B)}_{1:k′} (Σ^{1:k′}_{1:k′})² (V̄^{(B)}_{1:k′})⊤ ).

The first term can be controlled using Lemma 1, and the last term is upper bounded using Lemma 2. Finally, the second term corresponds to the error made by ignoring the singular vectors which are not within the top k′. To estimate this term, we use the matrix Chernoff bound (Theorem 2.2 in [Tro11]), and prove that:

Lemma 4. With probability 1 − exp(−m^{1/4}), ‖(I − U_{1:k′}U⊤_{1:k′})M_{(B)}‖₂² ≤ (2/δ)√(mℓ log(ℓ)) + (ℓ/n) s²_{k+1}(M).

In summary, combining the four above lemmas, we can establish that Q accurately estimates V̄^{(B)}_{1:k}:

Theorem 5. If δ ≤ m^{-8/9}, with probability 1 − 3/ℓ², the output Q of Algorithm 2 when applied to A_{(B)} satisfies, for all i ≤ k:
‖(V̄^{(B)}_{1:i})⊤ · Q⊥‖₂ ≤ ( 3δ²(s²_{k+1}(M) + 2m^{2/3} n) + 3(2+c₁) δ (n/ℓ) √(mℓ log(ℓ)) ) / ( δ² s_i²(M) ),
where c₁ is the constant from Lemma 1.

3.2 Step 2: Estimating the principal right singular vectors of M

In this step, we aim at estimating the top k right singular vectors V_{1:k}, or at least at producing k vectors whose linear span approximates that of V_{1:k}.
Towards this objective, we start from Q derived in the previous step, and define the (m × k) matrix W = A_{(B2)} Q. W is stored and kept in memory for the remainder of the algorithm. It is tempting to directly read from W the top k′ left singular vectors U_{1:k′}. Indeed, we know that Q ≈ √(n/ℓ) V^{(B)}_{1:k} and E[A_{(B2)}] = δUΣ(V^{(B)})⊤, and hence E[W] ≈ δ√(ℓ/n) U_{1:k}Σ^{1:k}_{1:k}. However, the level of the noise in W is too high to accurately extract U_{1:k′}. In turn, W can be written as δUΣ(V^{(B)})⊤Q + Z, where Z = (A_{(B2)} − δUΣ(V^{(B)})⊤)Q partly captures the noise in W. It is then easy to see that the level of the noise Z satisfies E[‖Z‖₂] ≥ E[‖Z‖_F/√k] = Ω(√(δm)). Indeed, first observe that Z is of rank k. Then E[‖Z‖_F²] = Σᵢ₌₁^m Σⱼ₌₁^k E[Z_{ij}²] ≈ mkδ: this is due to the facts that (i) Q and A_{(B2)} − δUΣ(V^{(B)})⊤ are independent (since A_{(B1)} and A_{(B2)} are independent), (ii) ‖Q_j‖₂² = 1 for all j ≤ k, and (iii) the entries of A_{(B2)} are independent with variance Θ(δ(1−δ)). However, for all j ≤ k′, the j-th singular value of δUΣ(V^{(B)})⊤Q scales as O(δ√(mℓ)) = O(√(δm/log(m))), since s_j(M) ≤ √(mn) and s_j(M_{(B)}) ≈ √(ℓ/n) s_j(M) when j ≤ k′, from Lemma 2. Instead, from W, A_{(B1)}, and the subsequent sampled arriving columns A_t, t > ℓ, we produce an (n × k) matrix V̂ whose linear span approximates that of V_{1:k′}. More precisely, we first let V̂^{(B)} = A⊤_{(B1)} W. Then for all t = ℓ+1, ..., n, we define V̂^t = A⊤_t W, where A_t is obtained from the t-th observed column of M after sampling each of its entries at rate δ. Multiplying W by Ã = [A_{(B1)}, A_{ℓ+1}, ..., A_n] amplifies the useful signal in W, so that V̂ = Ã⊤W constitutes a good approximation of V_{1:k}. To understand why, we can rewrite V̂ as follows:
V̂ = δ²M⊤M_{(B)}Q + δM⊤(A_{(B2)} − δM_{(B)})Q + (Ã − δM)⊤W.
In the above equation, the first term corresponds to the useful signal, and the two remaining terms constitute noise matrices.
From Theorem 5, the linear span of the columns of $Q$ approximates that of the columns of $\bar{V}^{(B)}$, and thus, for $j \le k'$, $s_j(\delta^2 M^\top M_{(B)} Q) \approx \delta^2 s_j^2(M) \sqrt{\ell/n} \ge \delta \sqrt{mn \log(\ell)}$. The spectral norms of the noise matrices are bounded using random matrix arguments, together with the fact that $A_{(B_2)} - \delta M_{(B)}$ and $\tilde{A} - \delta M$ are zero-mean random matrices with independent entries. We can show (see Lemma 14 in the supplementary material), using the independence of $A_{(B_1)}$ and $A_{(B_2)}$, that with high probability, $\|\delta M^\top (A_{(B_2)} - \delta M_{(B)}) Q\|_2 = O(\delta \sqrt{mn})$. We may also establish that with high probability, $\|(\tilde{A} - \delta M)^\top W\|_2 = O(\delta \sqrt{m(m+n)})$. This is a consequence of a result derived in [AM07] (quoted in Lemma 13 in the supplementary material) stating that with high probability, $\|\tilde{A} - \delta M\|_2 = O(\sqrt{\delta(m+n)})$, and of the fact that, due to the trimming process presented in Line 3 of Algorithm 1, $\|W\|_2 = O(\sqrt{\delta m})$. In summary, as soon as $n$ scales at least as $m$, the noise level becomes negligible, and the span of $\hat{V}_{1:k'}$ provides an accurate approximation of that of $V_{1:k'}$. The above arguments are made precise and rigorous in the supplementary material. The following theorem summarizes the accuracy of our estimator of $V_{1:k}$.

Theorem 6. With $\frac{\log^4(m)}{m} \le \delta \le m^{-8/9}$, there exists a constant $c_2$ such that, with probability $1 - k\delta$, for all $i \le k$:
$$\|V_i^\top (\hat{V}_{1:k})_\perp\|_2 \le c_2\, \frac{s_{k+1}^2(M) + n \log(m) \sqrt{m/\delta} + m \sqrt{n \log(m)/\delta}}{s_i^2(M)}.$$

3.3 Step 3: Estimating the principal left singular vectors of M

In the last step, we estimate the principal left singular vectors of $M$ to finally derive an estimator of $M^{(k)}$, the optimal rank-$k$ approximation of $M$. The construction of this estimator is based on the observation that $M^{(k)} = U_{1:k} \Sigma^{1:k}_{1:k} V_{1:k}^\top = M P_{V_{1:k}}$, where $P_{V_{1:k}} = V_{1:k} V_{1:k}^\top$ is an $(n \times n)$ matrix representing the projection onto the linear span of the top $k$ right singular vectors $V_{1:k}$ of $M$. Hence to estimate $M^{(k)}$, we try to approximate the matrix $P_{V_{1:k}}$.
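Step 3 below builds a $(k \times k)$ factor $\hat{R}$ through Gram–Schmidt so that $\hat{V}\hat{R}$ has orthonormal columns spanning the column space of $\hat{V}$. A minimal numerical sketch of such a routine (our own variable names; assumes $\hat{V}$ has full column rank):

```python
import numpy as np

def orthonormalizing_factor(v_hat):
    """Return a (k x k) matrix R such that v_hat @ R has orthonormal columns
    spanning the column space of v_hat, via classical Gram-Schmidt.
    Illustrative sketch, not the paper's code."""
    n, k = v_hat.shape
    q = np.zeros((n, k))
    coeff = np.zeros((k, k))        # upper-triangular: v_hat = q @ coeff
    for j in range(k):
        v = v_hat[:, j].copy()
        for i in range(j):
            coeff[i, j] = q[:, i] @ v_hat[:, j]
            v -= coeff[i, j] * q[:, i]
        coeff[j, j] = np.linalg.norm(v)
        q[:, j] = v / coeff[j, j]
    # v_hat = q @ coeff  =>  R = coeff^{-1} gives v_hat @ R = q.
    return np.linalg.inv(coeff)
```

Since $\hat{V}\hat{R}$ is an orthonormal basis of $\mathrm{span}(\hat{V})$, the product $\hat{V}\hat{R}\hat{R}^\top\hat{V}^\top$ is exactly the orthogonal projector onto that span, which is how $P_{\hat{V}}$ is formed below.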
To this aim, we construct a $(k \times k)$ matrix $\hat{R}$ so that the column vectors of $\hat{V}\hat{R}$ form an orthonormal basis whose span corresponds to that of the column vectors of $\hat{V}$. This construction is achieved using the Gram–Schmidt process. We then approximate $P_{V_{1:k}}$ by $P_{\hat{V}} = \hat{V} \hat{R} \hat{R}^\top \hat{V}^\top$, and finally our estimator $\hat{M}^{(k)}$ of $M^{(k)}$ is $\frac{1}{\delta} \tilde{A} P_{\hat{V}}$. The construction of $\hat{M}^{(k)}$ can be made in a memory-efficient way accommodating our streaming model, where the columns of $M$ arrive one after the other, as described in the pseudo-code of SLA. First, after constructing $\hat{V}^{(B)}$ in Step 2, we build the matrix $\hat{I} = A_{(B_1)} \hat{V}^{(B)}$. Then, for $t = \ell+1, \ldots, n$, after constructing the $t$-th row $\hat{V}^t$ of $\hat{V}$, we update $\hat{I}$ by adding to it the matrix $A_t \hat{V}^t$, so that after all columns of $M$ are observed, $\hat{I} = \tilde{A} \hat{V}$. Hence we can build an estimator $\hat{U}$ of the principal left singular vectors of $M$ as $\hat{U} = \frac{1}{\delta} \hat{I} \hat{R} \hat{R}^\top$, and finally obtain $\hat{M}^{(k)} = [\hat{U} \hat{V}^\top]^1_0$.

To quantify the estimation error of $\hat{M}^{(k)}$, we decompose $M^{(k)} - \hat{M}^{(k)}$ as:
$$M^{(k)} - \hat{M}^{(k)} = M^{(k)} (I - P_{\hat{V}}) + (M^{(k)} - M) P_{\hat{V}} + \Big(M - \frac{1}{\delta} \tilde{A}\Big) P_{\hat{V}}.$$
The first term of the r.h.s. of the above equation can be bounded using Theorem 6: for $i \le k$, we have $s_i(M)^2 \|V_i^\top \hat{V}_\perp\|_2 \le z = c_2 \big(s_{k+1}^2(M) + n \log(m) \sqrt{m/\delta} + m \sqrt{n \log(m)/\delta}\big)$, and hence we can conclude that for all $i \le k$,
$$\|s_i(M)\, U_i V_i^\top (I - P_{\hat{V}})\|_F^2 \le z.$$
The second term can be easily bounded by observing that the matrix $(M^{(k)} - M) P_{\hat{V}}$ is of rank $k$: $\|(M^{(k)} - M) P_{\hat{V}}\|_F^2 \le k \|(M^{(k)} - M) P_{\hat{V}}\|_2^2 \le k \|M^{(k)} - M\|_2^2 = k\, s_{k+1}(M)^2$. The last term in the r.h.s. can be controlled as in the performance analysis of Step 2, observing that $(\frac{1}{\delta}\tilde{A} - M) P_{\hat{V}}$ is of rank $k$: $\|(\frac{1}{\delta}\tilde{A} - M) P_{\hat{V}}\|_F^2 \le k \|\frac{1}{\delta}\tilde{A} - M\|_2^2 = O(k(m+n)/\delta)$, using the bound $\|\tilde{A} - \delta M\|_2 = O(\sqrt{\delta(m+n)})$ quoted above. It is then easy to check that, for the range of the parameter $\delta$ we are interested in, the upper bound $z$ on the first term dominates the upper bounds on the two other terms. Finally, we obtain the following result (see the supplementary material for a complete proof):

Theorem 7. When $\frac{\log^4(m)}{m} \le \delta \le m^{-8/9}$, with probability $1 - k\delta$, the output of the SLA algorithm satisfies, for some constant $c_3$:
$$\frac{\|M^{(k)} - [\hat{U}\hat{V}^\top]^1_0\|_F^2}{mn} \le c_3 k^2 \left( \frac{s_{k+1}^2(M)}{mn} + \frac{\log(m)}{\sqrt{\delta m}} + \sqrt{\frac{\log(m)}{\delta n}} \right).$$

Note that if $\frac{\log^4(m)}{m} \le \delta \le m^{-8/9}$, then $\frac{\log(m)}{\sqrt{\delta m}} = o(1)$. Hence if $n \ge m$, the SLA algorithm provides an asymptotically accurate estimate of $M^{(k)}$ as soon as $s_{k+1}(M)^2 / (mn) = o(1)$.

3.4 Required Memory and Running Time

Required memory. Lines 1–6 of the SLA pseudo-code. $A_{(B_1)}$ and $A_{(B_2)}$ have $O(\delta m \ell)$ non-zero entries, and we need $O(\delta m \ell \log m)$ bits to store the ids of these entries. Similarly, the memory required to store $\Phi$ is $O(\delta^2 m \ell^2 \log(\ell))$. Storing $Q$ further requires $O(\ell k)$ memory. Finally, $\hat{V}^{(B_1)}$ and $\hat{I}$ computed in Line 6 require $O(\ell k)$ and $O(km)$ memory space, respectively. Thus, when $\ell = \frac{1}{\delta} \log m$, this first part of the algorithm requires $O(k(m+n))$ memory. Lines 7–9. Before we treat the remaining columns, $A_{(B_1)}$, $A_{(B_2)}$, and $Q$ are removed from memory. Using this released memory, when the $t$-th column arrives, we can store it, compute $\hat{V}^t$ and $\hat{I}$, and remove the column to save memory. Therefore, we do not need additional memory to treat the remaining columns. Lines 10 and 11. From $\hat{I}$ and $\hat{V}$, we compute $\hat{U}$. To this aim, the memory required is $O(k(m+n))$.

Running time. Lines 1 to 6. The SPCA algorithm requires $O(\ell k (\delta^2 m \ell + k) \log(\ell))$ floating-point operations to compute $Q$. $W$, $\hat{V}$, and $\hat{I}$ are inner products, and their computations require $O(\delta k m \ell)$ operations. With $\ell = \frac{1}{\delta} \log(m)$, the number of operations to treat the first $\ell$ columns is $O(\ell k (\delta^2 m \ell + k) \log(\ell) + k \delta m \ell) = O(km) + O(\frac{k^2}{\delta})$. Lines 7 to 9. To compute $\hat{V}^t$ and $\hat{I}$ when the $t$-th column arrives, we need $O(\delta k m)$ operations.
Since there are $n - \ell$ remaining columns, the total number of operations is $O(\delta k m n)$. Lines 10 and 11. $\hat{R}$ is computed from $\hat{V}$ using the Gram–Schmidt process, which requires $O(k^2 m)$ operations. We then compute $\hat{I} \hat{R} \hat{R}^\top$ using $O(k^2 m)$ operations. In summary, we have shown that:

Theorem 8. The memory required to run the SLA algorithm is $O(k(m+n))$. Its running time is $O(\delta k m n + \frac{k^2}{\delta} + k^2 m)$.

Observe that when $\delta \ge \max\big(\frac{\log^4(m)}{m}, \frac{\log^2(m)}{n}\big)$ and $k \le \log^6(m)$, we have $\delta k m n \ge k^2/\delta \ge k^2 m$, and therefore the running time of SLA is $O(\delta k m n)$.

3.5 General Streaming Model

SLA is a one-pass low-rank approximation algorithm, but the set of the $\ell$ first observed columns of $M$ needs to be chosen uniformly at random. We can readily extend SLA to deal with scenarios where the columns of $M$ are observed in an arbitrary order. This extension requires two passes over $M$, but otherwise performs exactly the same operations as SLA. In the first pass, we extract a set of $\ell$ columns chosen uniformly at random, and in the second pass, we deal with all other columns. To extract $\ell$ randomly selected columns in the first pass, we proceed as follows. Assume that when the $t$-th column of $M$ arrives, we have already extracted $l$ columns. Then the $t$-th column is extracted with probability $\frac{\ell - l}{n - t + 1}$. This two-pass version of SLA enjoys the same performance guarantees as SLA.

4 Conclusion

This paper revisited the low-rank approximation problem. We proposed a streaming algorithm that samples the data and produces a near-optimal solution with a vanishing mean square error. The algorithm uses memory space scaling linearly with the ambient dimension of the matrix, i.e., the memory required to store the output alone. Its running time scales as the number of sampled entries of the input matrix.
The algorithm is relatively simple and, in particular, does not exploit the elaborate techniques (such as sparse embedding techniques) recently developed to reduce the memory requirement and complexity of algorithms addressing various problems in linear algebra.

References

[AM07] Dimitris Achlioptas and Frank McSherry. Fast computation of low-rank matrix approximations. Journal of the ACM (JACM), 54(2):9, 2007.
[BJS15] Srinadh Bhojanapalli, Prateek Jain, and Sujay Sanghavi. Tighter low-rank approximation via sampling the leveraged element. In Proceedings of the Twenty-Sixth Annual ACM-SIAM Symposium on Discrete Algorithms, pages 902–920. SIAM, 2015.
[CW09] Kenneth L. Clarkson and David P. Woodruff. Numerical linear algebra in the streaming model. In Proceedings of the Forty-First Annual ACM Symposium on Theory of Computing, pages 205–214. ACM, 2009.
[CW13] Kenneth L. Clarkson and David P. Woodruff. Low rank approximation and regression in input sparsity time. In Proceedings of the Forty-Fifth Annual ACM Symposium on Theory of Computing, pages 81–90. ACM, 2013.
[GP14] Mina Ghashami and Jeff M. Phillips. Relative errors for deterministic low-rank matrix approximations. In SODA, pages 707–717. SIAM, 2014.
[HMT11] Nathan Halko, Per-Gunnar Martinsson, and Joel A. Tropp. Finding structure with randomness: Probabilistic algorithms for constructing approximate matrix decompositions. SIAM Review, 53(2):217–288, 2011.
[Lib13] Edo Liberty. Simple and deterministic matrix sketching. In Proceedings of the 19th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 581–588. ACM, 2013.
[MCJ13] Ioannis Mitliagkas, Constantine Caramanis, and Prateek Jain. Memory limited, streaming PCA. In Advances in Neural Information Processing Systems, 2013.
[Tro11] Joel A. Tropp. Improved analysis of the subsampled randomized Hadamard transform. Advances in Adaptive Data Analysis, 3(01n02):115–126, 2011.
[Tro12] Joel A. Tropp. User-friendly tail bounds for sums of random matrices.
Foundations of Computational Mathematics, 12(4):389–434, 2012.
[Woo14] David Woodruff. Low rank approximation lower bounds in row-update streams. In Advances in Neural Information Processing Systems, pages 1781–1789, 2014.
Character-level Convolutional Networks for Text Classification∗

Xiang Zhang, Junbo Zhao, Yann LeCun
Courant Institute of Mathematical Sciences, New York University
719 Broadway, 12th Floor, New York, NY 10003
{xiang, junbo.zhao, yann}@cs.nyu.edu

Abstract

This article offers an empirical exploration on the use of character-level convolutional networks (ConvNets) for text classification. We constructed several large-scale datasets to show that character-level convolutional networks could achieve state-of-the-art or competitive results. Comparisons are offered against traditional models such as bag of words, n-grams and their TFIDF variants, and deep learning models such as word-based ConvNets and recurrent neural networks.

1 Introduction

Text classification is a classic topic for natural language processing, in which one needs to assign predefined categories to free-text documents. The range of text classification research goes from designing the best features to choosing the best possible machine learning classifiers. To date, almost all techniques of text classification are based on words, in which simple statistics of some ordered word combinations (such as n-grams) usually perform the best [12]. On the other hand, many researchers have found that convolutional networks (ConvNets) [17] [18] are useful in extracting information from raw signals, ranging from computer vision applications to speech recognition and others. In particular, time-delay networks used in the early days of deep learning research are essentially convolutional networks that model sequential data [1] [31]. In this article we explore treating text as a kind of raw signal at character level, and applying temporal (one-dimensional) ConvNets to it. For this article we only used a classification task as a way to exemplify ConvNets' ability to understand texts. Historically we know that ConvNets usually require large-scale datasets to work, therefore we also build several of them.
An extensive set of comparisons is offered with traditional models and other deep learning models. Applying convolutional networks to text classification or natural language processing at large has been explored in the literature. It has been shown that ConvNets can be directly applied to distributed [6] [16] or discrete [13] embeddings of words, without any knowledge of the syntactic or semantic structures of a language. These approaches have been proven to be competitive with traditional models. There are also related works that use character-level features for language processing. These include using character-level n-grams with linear classifiers [15], and incorporating character-level features into ConvNets [28] [29]. In particular, these ConvNet approaches use words as a basis, in which character-level features extracted at the word [28] or word n-gram [29] level form a distributed representation. Improvements for part-of-speech tagging and information retrieval were observed. This article is the first to apply ConvNets only on characters. We show that when trained on large-scale datasets, deep ConvNets do not require the knowledge of words, in addition to the conclusion from previous research that ConvNets do not require knowledge about the syntactic or semantic structure of a language. This simplification of engineering could be crucial for a single system that can work for different languages, since characters always constitute a necessary construct regardless of whether segmentation into words is possible. Working on only characters also has the advantage that abnormal character combinations such as misspellings and emoticons may be naturally learnt.

∗An early version of this work entitled "Text Understanding from Scratch" was posted in Feb 2015 as arXiv:1502.01710. The present paper has considerably more experimental results and a rewritten introduction.
2 Character-level Convolutional Networks

In this section, we introduce the design of character-level ConvNets for text classification. The design is modular, where the gradients are obtained by back-propagation [27] to perform optimization.

2.1 Key Modules

The main component is the temporal convolutional module, which simply computes a 1-D convolution. Suppose we have a discrete input function $g : [1, l] \to \mathbb{R}$ and a discrete kernel function $f : [1, k] \to \mathbb{R}$. The convolution $h : [1, \lfloor (l-k+1)/d \rfloor] \to \mathbb{R}$ between $f$ and $g$ with stride $d$ is defined as
$$h(y) = \sum_{x=1}^{k} f(x) \cdot g(y \cdot d - x + c),$$
where $c = k - d + 1$ is an offset constant. Just as in traditional convolutional networks in vision, the module is parameterized by a set of such kernel functions $f_{ij}(x)$ ($i = 1, 2, \ldots, m$ and $j = 1, 2, \ldots, n$) which we call weights, on a set of inputs $g_i(x)$ and outputs $h_j(y)$. We call each $g_i$ (or $h_j$) an input (or output) feature, and $m$ (or $n$) the input (or output) feature size. The output $h_j(y)$ is obtained by a sum over $i$ of the convolutions between $g_i(x)$ and $f_{ij}(x)$.

One key module that helped us to train deeper models is temporal max-pooling. It is the 1-D version of the max-pooling module used in computer vision [2]. Given a discrete input function $g : [1, l] \to \mathbb{R}$, the max-pooling function $h : [1, \lfloor (l-k+1)/d \rfloor] \to \mathbb{R}$ of $g$ is defined as
$$h(y) = \max_{x=1}^{k} g(y \cdot d - x + c),$$
where $c = k - d + 1$ is an offset constant. This very pooling module enabled us to train ConvNets deeper than 6 layers, where all others failed. The analysis by [3] might shed some light on this.

The non-linearity used in our model is the rectifier or thresholding function $h(x) = \max\{0, x\}$, which makes our convolutional layers similar to rectified linear units (ReLUs) [24]. The algorithm used is stochastic gradient descent (SGD) with a minibatch of size 128, using momentum [26] [30] 0.9 and an initial step size of 0.01, which is halved every 3 epochs 10 times.
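As a concrete reading of the convolution and pooling definitions above, here is a minimal NumPy sketch of the two modules (our own illustrative code, not the paper's Torch implementation; note the kernel is reversed over each window, since the definition is a true convolution rather than a correlation):

```python
import numpy as np

def temporal_conv(g, f, d):
    """1-D convolution with stride d following the paper's definition:
    h(y) = sum_{x=1}^{k} f(x) * g(y*d - x + c),  c = k - d + 1.
    The window for output y covers g[(y-1)d+1 .. (y-1)d+k] (1-based)."""
    l, k = len(g), len(f)
    out_len = (l - k + 1) // d
    h = np.empty(out_len)
    for y in range(out_len):
        window = g[y * d : y * d + k]
        h[y] = np.dot(f[::-1], window)   # reversed kernel over the window
    return h

def temporal_max_pool(g, k, d):
    """Temporal max-pooling with the same indexing convention."""
    l = len(g)
    out_len = (l - k + 1) // d
    return np.array([g[y * d : y * d + k].max() for y in range(out_len)])
```

For example, with $l_0 = 1014$, a kernel of size 7 and stride 1 yields an output of length $\lfloor (1014 - 7 + 1)/1 \rfloor = 1008$, matching the first layer of the model described below.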
Each epoch takes a fixed number of random training samples uniformly sampled across classes. This number will later be detailed for each dataset separately. The implementation is done using Torch 7 [4].

2.2 Character quantization

Our models accept a sequence of encoded characters as input. The encoding is done by prescribing an alphabet of size $m$ for the input language, and then quantizing each character using 1-of-$m$ encoding (or "one-hot" encoding). Then, the sequence of characters is transformed into a sequence of such $m$-sized vectors with fixed length $l_0$. Any character exceeding length $l_0$ is ignored, and any characters that are not in the alphabet, including blank characters, are quantized as all-zero vectors. The character quantization order is backward, so that the latest reading on characters is always placed near the beginning of the output, making it easy for fully connected layers to associate weights with the latest reading. The alphabet used in all of our models consists of 70 characters, including 26 English letters, 10 digits, 33 other characters and the new-line character. The non-space characters are:

abcdefghijklmnopqrstuvwxyz0123456789-,;.!?:'"/\|_@#$%^&*~`+-=<>()[]{}

Later we also compare with models that use a different alphabet in which we distinguish between upper-case and lower-case letters.

2.3 Model Design

We designed 2 ConvNets – one large and one small. They are both 9 layers deep, with 6 convolutional layers and 3 fully-connected layers. Figure 1 gives an illustration.

Figure 1: Illustration of our model (quantization, convolution and pooling layers, then fully-connected layers).

The input has 70 features due to our character quantization method, and the input feature length is 1014. It seems that 1014 characters could already capture most of the texts of interest. We also insert 2 dropout [10] modules in between the 3 fully-connected layers to regularize. They have dropout probability of 0.5.
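The character quantization of Section 2.2 can be sketched as follows. This is our own illustrative code: the `ALPHABET` string below approximates (but does not exactly reproduce) the paper's 70-symbol alphabet, and we assume the first $l_0$ characters are kept before the backward encoding is applied.

```python
import numpy as np

# Assumed approximation of the paper's lower-case alphabet (not exact).
ALPHABET = "abcdefghijklmnopqrstuvwxyz0123456789-,;.!?:'\"/\\|_@#$%^&*~`+=<>()[]{}"
CHAR_TO_IDX = {ch: i for i, ch in enumerate(ALPHABET)}

def quantize(text, l0=1014):
    """Encode `text` as an (m x l0) one-hot matrix, reading characters
    BACKWARD so the latest characters sit at the start of the output.
    Out-of-alphabet characters (including space) become all-zero columns;
    characters are lower-cased, matching the single-case alphabet."""
    m = len(ALPHABET)
    out = np.zeros((m, l0), dtype=np.float32)
    for pos, ch in enumerate(reversed(text[:l0])):   # backward quantization
        idx = CHAR_TO_IDX.get(ch.lower())
        if idx is not None:
            out[idx, pos] = 1.0
    return out
```

Texts shorter than $l_0$ are simply zero-padded on the right, which is consistent with treating missing characters as all-zero vectors.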
Table 1 lists the configurations for convolutional layers, and Table 2 lists the configurations for fully-connected (linear) layers.

Table 1: Convolutional layers used in our experiments. The convolutional layers have stride 1 and pooling layers are all non-overlapping ones, so we omit the description of their strides.

Layer  Large Feature  Small Feature  Kernel  Pool
1      1024           256            7       3
2      1024           256            7       3
3      1024           256            3       N/A
4      1024           256            3       N/A
5      1024           256            3       N/A
6      1024           256            3       3

We initialize the weights using a Gaussian distribution. The mean and standard deviation used for initializing the large model is (0, 0.02), and for the small model (0, 0.05).

Table 2: Fully-connected layers used in our experiments. The number of output units for the last layer is determined by the problem. For example, for a 10-class classification problem it will be 10.

Layer  Output Units Large  Output Units Small
7      2048                1024
8      2048                1024
9      Depends on the problem

For different problems the input lengths may be different (for example, in our case $l_0 = 1014$), and so are the frame lengths. From our model design, it is easy to see that given input length $l_0$, the output frame length after the last convolutional layer (but before any of the fully-connected layers) is $l_6 = (l_0 - 96)/27$. This number multiplied by the frame size at layer 6 will give the input dimension the first fully-connected layer accepts.

2.4 Data Augmentation using Thesaurus

Many researchers have found that appropriate data augmentation techniques are useful for controlling the generalization error of deep learning models. These techniques usually work well when we can find appropriate invariance properties that the model should possess. In terms of texts, it is not reasonable to augment the data using signal transformations as done in image or speech recognition, because the exact order of characters may form rigorous syntactic and semantic meaning.
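Stepping back to the model-design numbers above: the closed form $l_6 = (l_0 - 96)/27$ can be verified mechanically by tracing a frame through Table 1's layer configuration (illustrative Python, not from the paper):

```python
def output_frame_length(l0):
    """Trace the frame length through the 6 convolutional layers of Table 1
    (kernels 7, 7, 3, 3, 3, 3; pooling 3 after layers 1, 2 and 6; stride-1
    convolutions, non-overlapping pools)."""
    l = l0
    for kernel, pool in [(7, 3), (7, 3), (3, None), (3, None), (3, None), (3, 3)]:
        l = l - kernel + 1          # stride-1 convolution shrinks by kernel-1
        if pool:
            l = l // pool           # non-overlapping max-pooling divides by 3
    return l

# For l0 = 1014: 1008 -> 336 -> 330 -> 110 -> 108 -> 106 -> 104 -> 102 -> 34,
# which matches (1014 - 96) / 27 = 34.
```

The closed form holds exactly when the intermediate divisions are exact, as they are for $l_0 = 1014$.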
Therefore, the best way to do data augmentation would have been using human rephrases of sentences, but this is unrealistic and expensive due to the large volume of samples in our datasets. As a result, the most natural choice in data augmentation for us is to replace words or phrases with their synonyms.

We experimented with data augmentation using an English thesaurus, which is obtained from the mytheas component used in the LibreOffice1 project. That thesaurus in turn was obtained from WordNet [7], where every synonym to a word or phrase is ranked by semantic closeness to the most frequently seen meaning. To decide how many words to replace, we extract all replaceable words from the given text and randomly choose $r$ of them to be replaced. The probability of the number $r$ is determined by a geometric distribution with parameter $p$, in which $P[r] \sim p^r$. The index $s$ of the synonym chosen for a given word is also determined by another geometric distribution, in which $P[s] \sim q^s$. This way, the probability of a synonym being chosen becomes smaller when it moves further from the most frequently seen meaning. We will report the results using this new data augmentation technique with $p = 0.5$ and $q = 0.5$.

3 Comparison Models

To offer fair comparisons to competitive models, we conducted a series of experiments with both traditional and deep learning methods. We tried our best to choose models that can provide comparable and competitive results, and the results are reported faithfully without any model selection.

3.1 Traditional Methods

We refer to traditional methods as those using a hand-crafted feature extractor and a linear classifier. The classifier used is a multinomial logistic regression in all these models.

Bag-of-words and its TFIDF. For each dataset, the bag-of-words model is constructed by selecting the 50,000 most frequent words from the training subset. For the normal bag-of-words, we use the counts of each word as the features.
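The Section 2.4 replacement scheme (a geometric choice of how many words to replace and which synonym to use) can be sketched as follows. The helper names and data structures are ours; the geometric distributions are truncated so that indices never run off the available words or synonyms.

```python
import random

def sample_geometric(p, max_val, rng):
    """Sample r with P[r] proportional to p**r over {0, ..., max_val},
    matching P[r] ~ p^r (truncated to the available range)."""
    weights = [p ** r for r in range(max_val + 1)]
    return rng.choices(range(max_val + 1), weights=weights)[0]

def augment(words, synonyms, p=0.5, q=0.5, rng=random):
    """Replace r randomly chosen replaceable words with their s-th synonym.
    `synonyms` maps word -> list ordered by semantic closeness (index 0 is
    the closest).  Illustrative sketch, not the paper's code."""
    replaceable = [i for i, w in enumerate(words) if w in synonyms]
    r = sample_geometric(p, len(replaceable), rng)
    out = list(words)
    for i in rng.sample(replaceable, r):
        syns = synonyms[out[i]]
        s = sample_geometric(q, len(syns) - 1, rng)
        out[i] = syns[s]
    return out
```

Because both distributions decay geometrically, most augmented samples replace few words, and replacements favor the closest-ranked synonyms, which is the stated design intent.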
For the TFIDF (term-frequency inverse-document-frequency) [14] version, we use the counts as the term frequency. The inverse document frequency is the logarithm of the ratio between the total number of samples and the number of samples containing the word in the training subset. The features are normalized by dividing by the largest feature value.

Bag-of-ngrams and its TFIDF. The bag-of-ngrams models are constructed by selecting the 500,000 most frequent n-grams (up to 5-grams) from the training subset for each dataset. The feature values are computed the same way as in the bag-of-words model.

Bag-of-means on word embedding. We also have an experimental model that uses k-means on word2vec [23] learnt from the training subset of each dataset, and then uses these learnt means as representatives of the clustered words. We take into consideration all the words that appeared more than 5 times in the training subset. The dimension of the embedding is 300. The bag-of-means features are computed the same way as in the bag-of-words model. The number of means is 5000.

3.2 Deep Learning Methods

Recently deep learning methods have started to be applied to text classification. We choose two simple and representative models for comparison, in which one is a word-based ConvNet and the other a simple long-short term memory (LSTM) [11] recurrent neural network model.

Word-based ConvNets. Among the large number of recent works on word-based ConvNets for text classification, one of the differences is the choice of using pretrained or end-to-end learned word representations. We offer comparisons with both, using the pretrained word2vec [23] embedding [16] and using lookup tables [5]. The embedding size is 300 in both cases, in the same way as in our bag-of-means model. To ensure fair comparison, the models for each case are of the same size as our character-level ConvNets, in terms of both the number of layers and each layer's output size.
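The TFIDF baseline described earlier in this section can be computed in a few lines. This is our own sketch under stated assumptions: tokenization by whitespace (the paper does not specify its tokenizer) and the idf/normalization rules exactly as described, i.e. $\mathrm{idf}(w) = \log(N / \mathrm{df}(w))$ with features divided by the largest value.

```python
import math
from collections import Counter

def tfidf_features(train_docs, vocab_size=50_000):
    """Bag-of-words TFIDF as described in Section 3.1: tf = raw count,
    idf = log(N / df) over the training subset, features scaled by the
    largest feature value.  Illustrative sketch, not the paper's code."""
    freq = Counter(w for doc in train_docs for w in doc.split())
    vocab = [w for w, _ in freq.most_common(vocab_size)]
    n = len(train_docs)
    df = Counter()
    for doc in train_docs:
        df.update(set(doc.split()))          # document frequency
    idf = {w: math.log(n / df[w]) for w in vocab}

    def featurize(doc):
        tf = Counter(doc.split())
        vec = [tf[w] * idf[w] for w in vocab]
        peak = max(vec) or 1.0               # normalize by the largest value
        return [v / peak for v in vec]
    return featurize
```

Note that a word occurring in every training document gets idf 0 and therefore contributes nothing, which is the expected behavior of this weighting.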
Experiments using a thesaurus for data augmentation are also conducted.

1http://www.libreoffice.org/

Figure 2: long-short term memory

Long-short term memory. We also offer a comparison with a recurrent neural network model, namely long-short term memory (LSTM) [11]. The LSTM model used in our case is word-based, using the pretrained word2vec embedding of size 300 as in previous models. The model is formed by taking the mean of the outputs of all LSTM cells to form a feature vector, and then using multinomial logistic regression on this feature vector. The output dimension is 512. The variant of LSTM we used is the common "vanilla" architecture [8] [9]. We also used gradient clipping [25] in which the gradient norm is limited to 5. Figure 2 gives an illustration.

3.3 Choice of Alphabet

For the alphabet of English, one apparent choice is whether to distinguish between upper-case and lower-case letters. We report experiments on this choice and observed that it usually (but not always) gives worse results when such a distinction is made. One possible explanation might be that semantics do not change with different letter cases, therefore there is a benefit of regularization.

4 Large-scale Datasets and Results

Previous research on ConvNets in different areas has shown that they usually work well with large-scale datasets, especially when the model takes in low-level raw features like characters in our case. However, most open datasets for text classification are quite small, and large-scale datasets are split with a significantly smaller training set than testing [21]. Therefore, instead of confusing our community more by using them, we built several large-scale datasets for our experiments, ranging from hundreds of thousands to several millions of samples. Table 3 is a summary.

Table 3: Statistics of our large-scale datasets.
Epoch size is the number of minibatches in one epoch.

Dataset                 Classes  Train Samples  Test Samples  Epoch Size
AG's News               4        120,000        7,600         5,000
Sogou News              5        450,000        60,000        5,000
DBPedia                 14       560,000        70,000        5,000
Yelp Review Polarity    2        560,000        38,000        5,000
Yelp Review Full        5        650,000        50,000        5,000
Yahoo! Answers          10       1,400,000      60,000        10,000
Amazon Review Full      5        3,000,000      650,000       30,000
Amazon Review Polarity  2        3,600,000      400,000       30,000

AG's news corpus. We obtained the AG's corpus of news articles on the web2. It contains 496,835 categorized news articles from more than 2000 news sources. We chose the 4 largest classes from this corpus to construct our dataset, using only the title and description fields. The number of training samples for each class is 30,000 and testing 1,900.

Sogou news corpus. This dataset is a combination of the SogouCA and SogouCS news corpora [32], containing in total 2,909,551 news articles in various topic channels. We then labeled each piece of news using its URL, by manually classifying their domain names. This gives us a large corpus of news articles labeled with their categories. There are a large number of categories, but most of them contain only a few articles. We chose 5 categories – "sports", "finance", "entertainment", "automobile" and "technology". The number of training samples selected for each class is 90,000 and testing 12,000. Although this is a dataset in Chinese, we used the pypinyin package combined with the jieba Chinese segmentation system to produce Pinyin – a phonetic romanization of Chinese. The models for English can then be applied to this dataset without change. The fields used are title and content.

2http://www.di.unipi.it/~gulli/AG_corpus_of_news_articles.html

Table 4: Testing errors of all the models. Numbers are in percentage. "Lg" stands for "large" and "Sm" stands for "small". "w2v" is an abbreviation for "word2vec", and "Lk" for "lookup table". "Th" stands for thesaurus.
ConvNets labeled "Full" are those that distinguish between lower- and upper-case letters.

Model               AG     Sogou  DBP.  Yelp P.  Yelp F.  Yah. A.  Amz. F.  Amz. P.
BoW                 11.19  7.15   3.39  7.76     42.01    31.11    45.36    9.60
BoW TFIDF           10.36  6.55   2.63  6.34     40.14    28.96    44.74    9.00
ngrams              7.96   2.92   1.37  4.36     43.74    31.53    45.73    7.98
ngrams TFIDF        7.64   2.81   1.31  4.56     45.20    31.49    47.56    8.46
Bag-of-means        16.91  10.79  9.55  12.67    47.46    39.45    55.87    18.39
LSTM                13.94  4.82   1.45  5.26     41.83    29.16    40.57    6.10
Lg. w2v Conv.       9.92   4.39   1.42  4.60     40.16    31.97    44.40    5.88
Sm. w2v Conv.       11.35  4.54   1.71  5.56     42.13    31.50    42.59    6.00
Lg. w2v Conv. Th.   9.91   –      1.37  4.63     39.58    31.23    43.75    5.80
Sm. w2v Conv. Th.   10.88  –      1.53  5.36     41.09    29.86    42.50    5.63
Lg. Lk. Conv.       8.55   4.95   1.72  4.89     40.52    29.06    45.95    5.84
Sm. Lk. Conv.       10.87  4.93   1.85  5.54     41.41    30.02    43.66    5.85
Lg. Lk. Conv. Th.   8.93   –      1.58  5.03     40.52    28.84    42.39    5.52
Sm. Lk. Conv. Th.   9.12   –      1.77  5.37     41.17    28.92    43.19    5.51
Lg. Full Conv.      9.85   8.80   1.66  5.25     38.40    29.90    40.89    5.78
Sm. Full Conv.      11.59  8.95   1.89  5.67     38.82    30.01    40.88    5.78
Lg. Full Conv. Th.  9.51   –      1.55  4.88     38.04    29.58    40.54    5.51
Sm. Full Conv. Th.  10.89  –      1.69  5.42     37.95    29.90    40.53    5.66
Lg. Conv.           12.82  4.88   1.73  5.89     39.62    29.55    41.31    5.51
Sm. Conv.           15.65  8.65   1.98  6.53     40.84    29.84    40.53    5.50
Lg. Conv. Th.       13.39  –      1.60  5.82     39.30    28.80    40.45    4.93
Sm. Conv. Th.       14.80  –      1.85  6.49     40.16    29.84    40.43    5.67

("–" marks the Sogou entries for thesaurus-augmented models, for which no Chinese thesaurus was available.)

DBPedia ontology dataset. DBpedia is a crowd-sourced community effort to extract structured information from Wikipedia [19]. The DBpedia ontology dataset is constructed by picking 14 non-overlapping classes from DBpedia 2014. From each of these 14 ontology classes, we randomly choose 40,000 training samples and 5,000 testing samples. The fields we used for this dataset contain the title and abstract of each Wikipedia article.

Yelp reviews. The Yelp reviews dataset is obtained from the Yelp Dataset Challenge in 2015. This dataset contains 1,569,264 samples that have review texts.
Two classification tasks are constructed from this dataset – one predicting the full number of stars the user has given, and the other predicting a polarity label by considering stars 1 and 2 negative, and 3 and 4 positive. The full dataset has 130,000 training samples and 10,000 testing samples for each star, and the polarity dataset has 280,000 training samples and 19,000 test samples for each polarity.

Yahoo! Answers dataset. We obtained the Yahoo! Answers Comprehensive Questions and Answers version 1.0 dataset through the Yahoo! Webscope program. The corpus contains 4,483,032 questions and their answers. We constructed a topic classification dataset from this corpus using the 10 largest main categories. Each class contains 140,000 training samples and 5,000 testing samples. The fields we used include question title, question content and best answer.

Amazon reviews. We obtained an Amazon review dataset from the Stanford Network Analysis Project (SNAP), which spans 18 years with 34,686,770 reviews from 6,643,669 users on 2,441,053 products [22]. Similarly to the Yelp review dataset, we also constructed 2 datasets – one for full score prediction and another for polarity prediction. The full dataset contains 600,000 training samples and 130,000 testing samples in each class, whereas the polarity dataset contains 1,800,000 training samples and 200,000 testing samples in each polarity sentiment. The fields used are review title and review content.

Table 4 lists all the testing errors we obtained from these datasets for all the applicable models. Note that since we do not have a Chinese thesaurus, the Sogou News dataset does not have any results using thesaurus augmentation. We labeled the best result in blue and the worst result in red.
5 Discussion

Figure 3: Relative errors with comparison models: (a) Bag-of-means; (b) n-grams TFIDF; (c) LSTM; (d) word2vec ConvNet; (e) Lookup table ConvNet; (f) Full alphabet ConvNet, across the AG News, DBPedia, Yelp P., Yelp F., Yahoo A., Amazon F. and Amazon P. datasets.

To understand the results in Table 4 further, we offer some empirical analysis in this section. To facilitate our analysis, we present the relative errors in Figure 3 with respect to comparison models. Each of these plots is computed by taking the difference between the error of a comparison model and that of our character-level ConvNet model, then dividing by the comparison model error. All ConvNets in the figure are the large models with thesaurus augmentation.

Character-level ConvNet is an effective method. The most important conclusion from our experiments is that character-level ConvNets can work for text classification without the need for words. This is a strong indication that language can also be thought of as a signal no different from any other kind. Figure 4 shows 12 random first-layer patches learnt by one of our character-level ConvNets on the DBPedia dataset.

Figure 4: First layer weights. For each patch, height is the kernel size and width the alphabet size.

Dataset size forms a dichotomy between traditional and ConvNets models. The most obvious trend coming from all the plots in Figure 3 is that larger datasets tend to perform better.
Traditional methods like n-grams TFIDF remain strong candidates for datasets of size up to several hundred thousand samples, and only when the datasets reach the scale of several million do we observe that character-level ConvNets start to do better. ConvNets may work well for user-generated data. User-generated data vary in how well the texts are curated. For example, in our million-scale datasets, Amazon reviews tend to be raw user inputs, whereas users might be extra careful in their writings on Yahoo! Answers. Plots comparing word-based deep models (figures 3c, 3d and 3e) show that character-level ConvNets work better for less curated user-generated texts. This property suggests that ConvNets may have better applicability to real-world scenarios. However, further analysis is needed to validate the hypothesis that ConvNets are truly good at identifying exotic character combinations such as misspellings and emoticons, as our experiments alone do not show any explicit evidence. Choice of alphabet makes a difference. Figure 3f shows that changing the alphabet by distinguishing between uppercase and lowercase letters could make a difference. For million-scale datasets, it seems that not making such a distinction usually works better. One possible explanation is that there is a regularization effect, but this remains to be validated. Semantics of tasks may not matter. Our datasets consist of two kinds of tasks: sentiment analysis (Yelp and Amazon reviews) and topic classification (all others). This dichotomy in task semantics does not seem to play a role in deciding which method is better. Bag-of-means is a misuse of word2vec [20]. One of the most obvious facts one can observe from table 4 and figure 3a is that the bag-of-means model performs worse in every case. Comparing with traditional models, this suggests that such a simple use of a distributed word representation may not give us an advantage in text classification.
However, our experiments do not speak for any other language processing tasks or other uses of word2vec. There is no free lunch. Our experiments once again verify that there is no single machine learning model that can work for all kinds of datasets. The factors discussed in this section could all play a role in deciding which method is best for some specific application.

6 Conclusion and Outlook

This article offers an empirical study on character-level convolutional networks for text classification. We compared with a large number of traditional and deep learning models using several large-scale datasets. On one hand, the analysis shows that character-level ConvNet is an effective method. On the other hand, how well our model performs in comparisons depends on many factors, such as dataset size, whether the texts are curated, and choice of alphabet. In the future, we hope to apply character-level ConvNets to a broader range of language processing tasks, especially when structured outputs are needed.

Acknowledgement

We gratefully acknowledge the support of NVIDIA Corporation with the donation of 2 Tesla K40 GPUs used for this research. We gratefully acknowledge the support of Amazon.com Inc for an AWS in Education Research grant used for this research.

References

[1] L. Bottou, F. Fogelman Soulié, P. Blanchet, and J. Lienard. Experiments with time delay networks and dynamic time warping for speaker independent isolated digit recognition. In Proceedings of EuroSpeech 89, volume 2, pages 537–540, Paris, France, 1989. [2] Y.-L. Boureau, F. Bach, Y. LeCun, and J. Ponce. Learning mid-level features for recognition. In Computer Vision and Pattern Recognition (CVPR), 2010 IEEE Conference on, pages 2559–2566. IEEE, 2010. [3] Y.-L. Boureau, J. Ponce, and Y. LeCun. A theoretical analysis of feature pooling in visual recognition. In Proceedings of the 27th International Conference on Machine Learning (ICML-10), pages 111–118, 2010. [4] R. Collobert, K.
Kavukcuoglu, and C. Farabet. Torch7: A matlab-like environment for machine learning. In BigLearn, NIPS Workshop, number EPFL-CONF-192376, 2011. [5] R. Collobert, J. Weston, L. Bottou, M. Karlen, K. Kavukcuoglu, and P. Kuksa. Natural language processing (almost) from scratch. J. Mach. Learn. Res., 12:2493–2537, Nov. 2011. [6] C. dos Santos and M. Gatti. Deep convolutional neural networks for sentiment analysis of short texts. In Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers, pages 69–78, Dublin, Ireland, August 2014. Dublin City University and Association for Computational Linguistics. [7] C. Fellbaum. Wordnet and wordnets. In K. Brown, editor, Encyclopedia of Language and Linguistics, pages 665–670, Oxford, 2005. Elsevier. [8] A. Graves and J. Schmidhuber. Framewise phoneme classification with bidirectional lstm and other neural network architectures. Neural Networks, 18(5):602–610, 2005. [9] K. Greff, R. K. Srivastava, J. Koutník, B. R. Steunebrink, and J. Schmidhuber. LSTM: A search space odyssey. CoRR, abs/1503.04069, 2015. [10] G. E. Hinton, N. Srivastava, A. Krizhevsky, I. Sutskever, and R. R. Salakhutdinov. Improving neural networks by preventing co-adaptation of feature detectors. arXiv preprint arXiv:1207.0580, 2012. [11] S. Hochreiter and J. Schmidhuber. Long short-term memory. Neural Comput., 9(8):1735–1780, Nov. 1997. [12] T. Joachims. Text categorization with support vector machines: Learning with many relevant features. In Proceedings of the 10th European Conference on Machine Learning, pages 137–142. Springer-Verlag, 1998. [13] R. Johnson and T. Zhang. Effective use of word order for text categorization with convolutional neural networks. CoRR, abs/1412.1058, 2014. [14] K. S. Jones. A statistical interpretation of term specificity and its application in retrieval. Journal of Documentation, 28(1):11–21, 1972. [15] I. Kanaris, K. Kanaris, I. Houvardas, and E. Stamatatos.
Words versus character n-grams for anti-spam filtering. International Journal on Artificial Intelligence Tools, 16(06):1047–1067, 2007. [16] Y. Kim. Convolutional neural networks for sentence classification. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1746–1751, Doha, Qatar, October 2014. Association for Computational Linguistics. [17] Y. LeCun, B. Boser, J. S. Denker, D. Henderson, R. E. Howard, W. Hubbard, and L. D. Jackel. Backpropagation applied to handwritten zip code recognition. Neural Computation, 1(4):541–551, Winter 1989. [18] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, November 1998. [19] J. Lehmann, R. Isele, M. Jakob, A. Jentzsch, D. Kontokostas, P. N. Mendes, S. Hellmann, M. Morsey, P. van Kleef, S. Auer, and C. Bizer. DBpedia - a large-scale, multilingual knowledge base extracted from wikipedia. Semantic Web Journal, 2014. [20] G. Lev, B. Klein, and L. Wolf. In defense of word embedding for generic text representation. In C. Biemann, S. Handschuh, A. Freitas, F. Meziane, and E. Métais, editors, Natural Language Processing and Information Systems, volume 9103 of Lecture Notes in Computer Science, pages 35–50. Springer International Publishing, 2015. [21] D. D. Lewis, Y. Yang, T. G. Rose, and F. Li. Rcv1: A new benchmark collection for text categorization research. The Journal of Machine Learning Research, 5:361–397, 2004. [22] J. McAuley and J. Leskovec. Hidden factors and hidden topics: Understanding rating dimensions with review text. In Proceedings of the 7th ACM Conference on Recommender Systems, RecSys ’13, pages 165–172, New York, NY, USA, 2013. ACM. [23] T. Mikolov, I. Sutskever, K. Chen, G. S. Corrado, and J. Dean. Distributed representations of words and phrases and their compositionality. In C. Burges, L. Bottou, M. Welling, Z. Ghahramani, and K.
Weinberger, editors, Advances in Neural Information Processing Systems 26, pages 3111–3119. 2013. [24] V. Nair and G. E. Hinton. Rectified linear units improve restricted boltzmann machines. In Proceedings of the 27th International Conference on Machine Learning (ICML-10), pages 807–814, 2010. [25] R. Pascanu, T. Mikolov, and Y. Bengio. On the difficulty of training recurrent neural networks. In ICML 2013, volume 28 of JMLR Proceedings, pages 1310–1318. JMLR.org, 2013. [26] B. Polyak. Some methods of speeding up the convergence of iteration methods. USSR Computational Mathematics and Mathematical Physics, 4(5):1–17, 1964. [27] D. Rumelhart, G. Hinton, and R. Williams. Learning representations by back-propagating errors. Nature, 323(6088):533–536, 1986. [28] C. D. Santos and B. Zadrozny. Learning character-level representations for part-of-speech tagging. In Proceedings of the 31st International Conference on Machine Learning (ICML-14), pages 1818–1826, 2014. [29] Y. Shen, X. He, J. Gao, L. Deng, and G. Mesnil. A latent semantic model with convolutional-pooling structure for information retrieval. In Proceedings of the 23rd ACM International Conference on Conference on Information and Knowledge Management, pages 101–110. ACM, 2014. [30] I. Sutskever, J. Martens, G. E. Dahl, and G. E. Hinton. On the importance of initialization and momentum in deep learning. In S. Dasgupta and D. McAllester, editors, Proceedings of the 30th International Conference on Machine Learning (ICML-13), volume 28, pages 1139–1147. JMLR Workshop and Conference Proceedings, May 2013. [31] A. Waibel, T. Hanazawa, G. Hinton, K. Shikano, and K. J. Lang. Phoneme recognition using time-delay neural networks. Acoustics, Speech and Signal Processing, IEEE Transactions on, 37(3):328–339, 1989. [32] C. Wang, M. Zhang, S. Ma, and L. Ru. Automatic online news issue construction in web environment.
In Proceedings of the 17th International Conference on World Wide Web, WWW ’08, pages 457–466, New York, NY, USA, 2008. ACM.
Interactive Control of Diverse Complex Characters with Neural Networks Igor Mordatch, Kendall Lowrey, Galen Andrew, Zoran Popovic, Emanuel Todorov Department of Computer Science, University of Washington {mordatch,lowrey,galen,zoran,todorov}@cs.washington.edu Abstract We present a method for training recurrent neural networks to act as near-optimal feedback controllers. It is able to generate stable and realistic behaviors for a range of dynamical systems and tasks – swimming, flying, biped and quadruped walking with different body morphologies. It does not require motion capture or task-specific features or state machines. The controller is a neural network, having a large number of feed-forward units that learn elaborate state-action mappings, and a small number of recurrent units that implement memory states beyond the physical system state. The action generated by the network is defined as velocity. Thus the network is not learning a control policy, but rather the dynamics under an implicit policy. Essential features of the method include interleaving supervised learning with trajectory optimization, injecting noise during training, training for unexpected changes in the task specification, and using the trajectory optimizer to obtain optimal feedback gains in addition to optimal actions. Figure 1: Illustration of the dynamical systems and tasks we have been able to control using the same method and architecture. See the video accompanying the submission. 1 Introduction Interactive real-time controllers that are capable of generating complex, stable and realistic movements have many potential applications including robotic control, animation and gaming. They can also serve as computational models in biomechanics and neuroscience. Traditional methods for designing such controllers are time-consuming and largely manual, relying on motion capture datasets or task-specific state machines.
Our goal is to automate this process by developing universal synthesis methods applicable to arbitrary behaviors and body morphologies, online changes in task objectives, and perturbations due to noise and modeling errors. This is also the ambitious goal of much work in Reinforcement Learning and stochastic optimal control; however, the goal has rarely been achieved in continuous high-dimensional spaces involving complex dynamics. Deep learning techniques on modern computers have produced remarkable results on a wide range of tasks, using methods that are not significantly different from what was used decades ago. The objective of the present paper is to design training methods that scale to larger and harder control problems, even if most of the components were already known. Specifically, we combine supervised learning with trajectory optimization, namely Contact-Invariant Optimization (CIO) [12], which has given rise to some of the most elaborate motor behaviors synthesized automatically. Trajectory optimization however is an offline method, so the rationale here is to use a neural network to learn from the optimizer, and eventually generate similar behaviors online. There is closely related recent work along these lines [9, 11], but the method presented here solves substantially harder problems – in particular it yields stable and realistic locomotion in three-dimensional space, whereas previous work applied only to two-dimensional characters. That this is possible is due to a number of technical improvements whose effects are analyzed below. Control was historically among the earliest applications of neural networks, but the recent surge in performance has been in computer vision, speech recognition and other classification problems that arise in artificial intelligence and machine learning, where large datasets are available.
In contrast, the data needed to learn neural network controllers is much harder to obtain, and in the case of imaginary characters and novel robots we have to synthesize the training data ourselves (via trajectory optimization). At the same time, the learning task for the network is harder. This is because we need precise real-valued outputs as opposed to categorical outputs, and also because our network must operate not on i.i.d. samples, but in a closed loop, where errors can amplify over time and cause instabilities. This necessitates specialized training procedures where the dataset of trajectories and the network parameters are optimized together. Another challenge caused by limited datasets is the potential for over-fitting and poor generalization. Our solution is to inject different forms of noise during training. The scale of our problem requires cloud computing and a GPU implementation, and training that takes on the order of hours. Interestingly, we invest more computing resources in generating the data than in learning from it. Thus the heavy lifting is done by the trajectory optimizer, and yet the neural network complements it in a way that yields interactive real-time control. Neural network controllers can also be trained with more traditional methods which do not involve trajectory optimization. This has been done in discrete action settings [10] as well as in continuous control settings [3, 6, 14]. A systematic comparison of these more direct methods with the present trajectory-optimization-based methods remains to be done. Nevertheless our impression is that networks trained with direct methods give rise to successful yet somewhat chaotic behaviors, while the present class of methods yields more realistic and purposeful behaviors. Using physics-based controllers allows for interaction, but these controllers need specially designed architectures for each range of tasks or characters.
For example, for biped locomotion common approaches include state machines and the use of simplified models (such as the inverted pendulum) and concepts (such as zero-moment or capture points) [21, 18]. For quadrupedal characters, a different set of state machines, contact schedules and simplified models is used [13]. For flying and swimming, yet another set of control architectures, commonly making use of explicit cyclic encodings, has been used [8, 7]. It is our aim to unify these disparate approaches.

2 Overview

Let the state of the character be defined as $[q\ f\ r]$, where q is the physical pose of the character (root position, orientation and joint angles), f are the contact forces being applied on the character by the ground, and r is the recurrent memory state of the character. The motion of the character is a state trajectory of length T defined by $X = [q^0\ f^0\ r^0\ \ldots\ q^T\ f^T\ r^T]$. Let $X_1, \ldots, X_N$ be a collection of N trajectories, each starting with different initial conditions and executing a different task (such as moving the character to a particular location). We introduce a neural network control policy $\pi_\theta: s \mapsto a$, parametrized by neural network weights θ, that maps a sensory state of the character s at each point in time to an optimal action a that controls the character. In general, the sensory state can be designed by the user to include arbitrary informative features, but in this preliminary work we use the following simple and general-purpose representation:

$s^t = [q^t\ r^t\ \dot q^{t-1}\ f^{t-1}], \qquad a^t = [\dot q^t\ \dot r^t\ f^t],$

where, e.g., $\dot q^t \triangleq q^{t+1} - q^t$ denotes the instantaneous rate of change of q at time t. With this representation of the action, the policy directly commands the desired velocity of the character and the applied contact forces, and determines the evolution of the recurrent state r. Thus, our network learns both optimal controls and a model of dynamics simultaneously.
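As a concrete illustration of this representation, a small sketch assembling $s^t$ and $a^t$ from finite differences (the flat concatenation layout and array sizes are illustrative choices, not the paper's code):

```python
import numpy as np

# s^t = [q^t, r^t, qdot^{t-1}, f^{t-1}] and a^t = [qdot^t, rdot^t, f^t],
# with qdot^t = q^{t+1} - q^t (finite-difference velocities).
def sensory_state(q_t, r_t, q_prev, f_prev):
    qdot_prev = q_t - q_prev                     # finite-difference velocity
    return np.concatenate([q_t, r_t, qdot_prev, f_prev])

def action(q_t, q_next, r_t, r_next, f_t):
    return np.concatenate([q_next - q_t, r_next - r_t, f_t])
```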
Let $C_i(X)$ be the total cost of the trajectory X, which rewards accurate execution of task i and physical realism of the character's motion. We want to jointly find a collection of optimal trajectories that each complete a particular task, along with a policy $\pi_\theta$ that is able to reconstruct the sense and action pairs $s^t(X)$ and $a^t(X)$ of all trajectories at all timesteps:

$\min_{\theta,\, X_1, \ldots, X_N}\ \sum_i C_i(X_i) \quad \text{subject to} \quad \forall i, t:\ a^t(X_i) = \pi_\theta(s^t(X_i)).$  (1)

The optimized policy parameters θ can then be used to execute the policy in real time and interactively control the character.

2.1 Stochastic Policy and Sensory Inputs

Injecting noise has been shown to produce more robust movement strategies in graphics and optimal control [20, 6], to reduce overfitting and prevent feature co-adaptation in neural network training [4], and to stabilize the recurrent behaviour of neural networks [5]. We inject noise in a principled way to aid in learning policies that do not diverge when rolled out at execution time. In particular, we inject additive Gaussian noise into the sensory inputs s given to the neural network. Let the sensory noise be denoted $\varepsilon \sim \mathcal{N}(0, \sigma_\varepsilon^2 I)$, so the resulting noisy policy inputs are $s + \varepsilon$. This is similar to denoising autoencoders [17], with one important difference: the change in input in our setting also induces a change in the optimal action to output. If the noise is small enough, the optimal action at nearby noisy states is given by the first-order expansion

$a(s + \varepsilon) = a + a_s \varepsilon,$  (2)

where $a_s$ (alternatively $\frac{da}{ds}$) is the matrix of optimal feedback gains around s. These gains can be calculated as a byproduct of trajectory optimization, as described in section 3.2. Intuitively, such feedback helps the neural network trainer learn a policy that can automatically correct for small deviations from the optimal trajectory, and allows us to use much less training data.
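A sketch of how one noisy training pair $(s + \varepsilon,\ a + a_s\varepsilon)$ could be formed from a single optimized timestep; the dimensions, state/action values, and the random gains matrix are illustrative stand-ins for quantities produced by trajectory optimization:

```python
import numpy as np

# One noisy regression pair from a single optimized timestep.
rng = np.random.default_rng(3)
sigma_eps = 1e-2
s = rng.normal(size=10)           # sensory state at this timestep
a = rng.normal(size=4)            # optimal action at this timestep
a_s = rng.normal(size=(4, 10))    # optimal feedback gains da/ds
eps = rng.normal(scale=sigma_eps, size=s.shape)
s_noisy = s + eps                 # network input
a_target = a + a_s @ eps          # first-order-corrected regression target
```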
2.2 Distributed Stochastic Optimization

The resulting constrained optimization problem (1) is nonconvex and too large to solve directly. We replace the hard equality constraint with a quadratic penalty with weight α:

$R(s, a, \theta, \varepsilon) = \frac{\alpha}{2}\,\|(a + a_s\varepsilon) - \pi_\theta(s + \varepsilon)\|^2,$  (3)

leading to the relaxed, unconstrained objective

$\min_{\theta,\, X_1, \ldots, X_N}\ \sum_i C_i(X_i) + \sum_{i,t} R(s^t(X_i), a^t(X_i), \theta, \varepsilon_{i,t}).$  (4)

We then proceed to solve the problem in block-alternating optimization fashion, optimizing for one set of variables while holding the others fixed. In particular, we independently optimize for each $X_i$ (trajectory optimization) and for θ (neural network regression). As the target action $a + a_s\varepsilon$ depends on the optimal feedback gains $a_s$, the noise ε is resampled after optimizing each policy training sub-problem. In principle the noisy sensory state and corresponding action could be recomputed within the neural network training procedure, but we found it expedient to freeze the noise during NN optimization (so that the optimal feedback gains need not be passed to the NN training process). Similar to recent stochastic optimization approaches, we introduce quadratic proximal regularization terms (weighted by rate η) that keep the solution of the current iteration close to its previous optimal value. The resulting algorithm is

Algorithm 1: Distributed Stochastic Optimization
1. Sample sensor noise $\bar\varepsilon_{i,t}$ for each t and i.
2. Optimize N trajectories (sec 3): $\bar X_i = \arg\min_X C_i(X) + \sum_t R(s_{i,t}, a_{i,t}, \bar\theta, \bar\varepsilon_{i,t}) + \frac{\eta}{2}\|X - \bar X_i\|^2$
3. Solve neural network regression (sec 4): $\bar\theta = \arg\min_\theta \sum_{i,t} R(\bar s_{i,t}, \bar a_{i,t}, \theta, \bar\varepsilon_{i,t}) + \frac{\eta}{2}\|\theta - \bar\theta\|^2$
4. Repeat.

Thus we have reduced the complex policy search problem in (1) to an alternating sequence of independent trajectory optimization and neural network regression problems, each of which is well studied and allows the use of existing implementations. While previous work [9, 11] used ADMM or dual gradient descent to solve similar optimization problems, it is non-trivial to adapt them to the asynchronous and stochastic setting we have. Despite a potentially slower rate, we still observe convergence, as shown in section 8.1.

3 Trajectory Optimization

We wish to find trajectories that start with particular initial conditions and execute the task, while satisfying physical realism of the character's motion. The existing approach we use is Contact-Invariant Optimization (CIO) [12], a direct trajectory optimization method based on inverse dynamics. Define the total cost for a trajectory X:

$C(X) = \sum_t c(\phi^t(X)),$  (5)

where $\phi^t(X)$ is a function that extracts a vector of features (such as root forces, contact distances, control torques, etc.) from the trajectory at time t and $c(\phi)$ is the state cost over these features. Physical realism is achieved by satisfying equations of motion, non-penetration, and force complementarity conditions at every point in the trajectory [12]:

$H(q)\ddot q + C(q, \dot q) = \tau + J^\top(q, \dot q)f, \qquad d(q) \ge 0, \qquad d(q)^\top f = 0, \qquad f \in K(q),$  (6)

where $d(q)$ is the distance of the contact to the ground and K is the contact friction cone. These constraints are implemented as soft constraints, as in [12], and are included in C(X). Initial conditions are also implemented as soft constraints in C(X). Additionally, we want to make sure the task is satisfied, such as moving to a particular location while minimizing effort. These task costs are the same for all our experiments and are described in section 8.
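Algorithm 1 can be illustrated on a deliberately tiny problem: scalar "trajectory" actions with a quadratic task cost and a linear "policy" $a = \theta s$, alternating steps 2 and 3 with the quadratic penalty and proximal terms in closed form. Everything here is a toy stand-in for the paper's trajectory optimizer and network trainer:

```python
import numpy as np

# Toy stand-in for Algorithm 1: N scalar "trajectory" actions x_i with task
# cost (x_i - target_i)^2, a linear "policy" pi_theta(s) = theta * s, penalty
# weight alpha and proximal rate eta. Noise and feedback gains are omitted
# for brevity; this only illustrates the block-alternating structure.
rng = np.random.default_rng(0)
s = rng.normal(size=5)            # fixed sensory inputs, one per trajectory
targets = 2.0 * s                 # task-optimal actions (true policy slope is 2)
x = np.zeros(5)                   # current per-trajectory actions
theta = 0.0
alpha, eta = 10.0, 1e-2
for _ in range(200):
    # Step 2: re-optimize each trajectory against task cost + policy penalty
    # + proximal term (closed form, since everything is quadratic).
    x = (2 * targets + alpha * theta * s + eta * x) / (2 + alpha + eta)
    # Step 3: proximally regularized least-squares regression of the policy.
    theta = (alpha * np.dot(s, x) + eta * theta) / (alpha * np.dot(s, s) + eta)
print(round(theta, 3))  # converges to the true slope, 2.0
```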
Importantly, CIO is able to find solutions from trivial initializations, which makes it possible to cover a broad range of characters and behaviors without requiring hand-designed controllers or motion capture for initialization.

3.1 Optimal Trajectory

The trajectory optimization problem consists of finding the optimal trajectory parameters X that minimize the total cost (5), with objective (3) now folded into C for simplicity:

$X^* = \arg\min_X C(X).$  (7)

We solve the above optimization problem using Newton's method, which requires the gradient and Hessian of the total cost function. Using the chain rule, these quantities are

$C_X = \sum_t c_\phi^t \phi_X^t, \qquad C_{XX} = \sum_t (\phi_X^t)^\top c_{\phi\phi}^t \phi_X^t + c_\phi^t \phi_{XX}^t \approx \sum_t (\phi_X^t)^\top c_{\phi\phi}^t \phi_X^t,$

where the truncation of the last term in $C_{XX}$ is the common Gauss-Newton Hessian approximation [1]. We choose cost functions for which $c_\phi$ and $c_{\phi\phi}$ can be calculated analytically. On the other hand, $\phi_X$ is calculated by finite differencing. The optimum can then be found by the following recursion:

$X^* \leftarrow X^* - C_{XX}^{-1} C_X.$  (8)

Because this optimization is only a sub-problem (step 2 in algorithm 1), we do not run it to convergence, and instead take between one and ten iterations.

3.2 Optimal Feedback Gains

In addition to the optimal trajectory, we also need to find the optimal feedback gains $a_s$ necessary to generate optimal actions for noisy inputs in (2). While these feedback gains are a byproduct of indirect trajectory optimization methods such as LQG, they are not an obvious result of direct trajectory optimization methods like CIO. While we could use a Linear Quadratic Gaussian (LQG) pass around our optimal solution to compute these gains, this is inefficient as it does not make use of computation already performed during direct trajectory optimization. Moreover, we found the resulting process can produce very large and ill-conditioned feedback gains.
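The Gauss-Newton recursion (8) can be sketched on a toy cost with linear features $\phi^t(X) = A_t X - b_t$ and $c(\phi) = \frac{1}{2}\|\phi\|^2$, for which $c_\phi = \phi$ and $c_{\phi\phi} = I$ (sizes and data are illustrative, and the Jacobians are exact here rather than finite-differenced):

```python
import numpy as np

# Gauss-Newton recursion X <- X - Cxx^{-1} Cx for the toy cost
# C(X) = sum_t 0.5*||A_t X - b_t||^2, so c_phi = phi and c_phiphi = I.
rng = np.random.default_rng(1)
A = rng.normal(size=(4, 6, 3))    # T=4 feature Jacobians phi_X^t, each (6, 3)
b = rng.normal(size=(4, 6))
X = np.zeros(3)
for _ in range(5):                # a handful of iterations, as in the sub-problem
    phi = A @ X - b               # features at the current trajectory
    Cx = np.einsum('tij,ti->j', A, phi)     # gradient: sum_t (phi_X^t)^T c_phi^t
    Cxx = np.einsum('tij,tik->jk', A, A)    # Gauss-Newton Hessian approximation
    X = X - np.linalg.solve(Cxx, Cx)
```

For this linear least-squares toy the recursion converges in a single step; the paper's features are nonlinear, which is why a few iterations are taken per sub-problem.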
One could change the objective function for the LQG pass when calculating feedback gains to make them smoother (for example, by adding an explicit trajectory smoothness cost), but then the optimal actions would be using feedback gains from a different objective. Instead, we describe a perturbation method that reuses computation done during direct trajectory optimization, while also producing better-conditioned gains. This is a general method for producing feedback gains that stabilize resulting optimal trajectories and can be useful for other applications. Suppose we perturb a certain aspect of the optimal trajectory X such that the sensory state changes: $s(X) = \bar s$. We wish to find how the optimal action $a(X)$ will change given this perturbation. We can enforce the perturbation with a soft constraint of weight λ, resulting in an augmented total cost:

$\tilde C(X, \bar s) = C(X) + \frac{\lambda}{2}\,\|s(X) - \bar s\|^2.$  (9)

Let $\tilde X(\bar s) = \arg\min_X \tilde C(X, \bar s)$ be the optimum of the augmented total cost. For $\bar s$ near $s(X)$ (as is the case with local feedback control), the minimizer of the augmented cost is the minimizer of a quadratic around the optimal trajectory X:

$\tilde X(\bar s) = X - \tilde C_{XX}^{-1}(X, \bar s)\,\tilde C_X(X, \bar s) = X - (C_{XX} + \lambda s_X^\top s_X)^{-1}\,(C_X + \lambda s_X^\top (s(X) - \bar s)),$

where all derivatives are calculated around X. Differentiating the above w.r.t. $\bar s$,

$\tilde X_{\bar s} = \lambda (C_{XX} + \lambda s_X^\top s_X)^{-1} s_X^\top = C_{XX}^{-1} s_X^\top \left(s_X C_{XX}^{-1} s_X^\top + \tfrac{1}{\lambda} I\right)^{-1},$

where the last equality follows from the Woodbury identity and has the benefit of reusing $C_{XX}^{-1}$, which is already computed as part of trajectory optimization. The optimal feedback gains for a are $a_{\bar s} = a_X \tilde X_{\bar s}$. Note that $s_X$ and $a_X$ are subsets of $\phi_X$ and are already calculated as part of trajectory optimization. Thus, computing optimal feedback gains comes at very little additional cost. Our approach produces softer feedback gains according to the parameter λ without modifying the cost function. The intuition is that instead of holding the perturbed initial state fixed (as LQG does, for example), we make matching the initial state a soft constraint.
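The Woodbury rewriting of the gains can be checked numerically; the sketch below compares the two expressions for $\tilde X_{\bar s}$ on random data (all matrices are synthetic stand-ins, not quantities from an actual trajectory):

```python
import numpy as np

# Check: lam*(Cxx + lam*Sx^T Sx)^{-1} Sx^T equals
#        Cxx^{-1} Sx^T (Sx Cxx^{-1} Sx^T + I/lam)^{-1}  (Woodbury identity).
rng = np.random.default_rng(2)
n, m = 8, 3                       # trajectory dim, sensory-state dim (illustrative)
M = rng.normal(size=(n, n))
Cxx = M @ M.T + n * np.eye(n)     # synthetic SPD stand-in for the cost Hessian
Sx = rng.normal(size=(m, n))      # synthetic stand-in for s_X
lam = 100.0

lhs = lam * np.linalg.solve(Cxx + lam * Sx.T @ Sx, Sx.T)
Cinv_SxT = np.linalg.solve(Cxx, Sx.T)               # reuses the factor of Cxx
rhs = Cinv_SxT @ np.linalg.inv(Sx @ Cinv_SxT + np.eye(m) / lam)
print(np.allclose(lhs, rhs))  # -> True
```

The right-hand form only requires solves against the already-factored $C_{XX}$ plus an m-by-m inverse, which is the computational saving the text describes.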
By weakening this constraint, we allow the initial state to be modified to better achieve the master cost function without using very aggressive feedback.

4 Neural Network Policy Regression

After performing trajectory optimization, we perform standard regression to fit a neural network to the noisy fixed input-output pairs $\{s + \varepsilon,\ a + a_s\varepsilon\}_{i,t}$ for each timestep and trajectory. Our neural network policy has a total of K layers, hidden-layer activation function σ (tanh, in the present work) and hidden units $h_k$ for layer k. To learn a model that is robust to small changes in neural state, we add independent Gaussian noise $\gamma_k \sim \mathcal{N}(0, \sigma_\gamma^2 I)$ to the neural activations at each layer during training. Wager et al. [19] observed that this noise model makes hidden units tend toward saturated regions and become less sensitive to the precise values of individual units. As with the trajectory optimization sub-problems, we do not run the neural network trainer to convergence but rather perform only a single pass of batched stochastic gradient descent over the dataset before updating the parameters θ in step 3 of Algorithm 1. All our experiments use 3-hidden-layer neural networks with 250 hidden units in each layer (other network sizes are evaluated in section 8.1). The neural network weight matrices are initialized with a spectral radius of just above 1, similar to [15, 5]. This helps ensure the initial network dynamics are stable and do not vanish or explode.

5 Training Trajectory Generation

To train a neural network for interactive use, we require a dataset that includes a dynamically changing task goal state. The task, in this case, is the locomotion of a character to a movable goal position controlled by the user. (Our character's goal position was always set to be the origin, which encodes the character's state position in the goal position's coordinate frame.
Thus the "origin" may shift relative to the character, but this keeps the behavior invariant to the global frame of reference.) Our trajectory generation creates a dataset consisting of trials and segments. Each trial k starts with a reference physical pose and null recurrent memory $[q\ \dot q\ r]_{\mathrm{init}}$ and must reach goal location $g_{k,0}$. After generating an optimal trajectory $X_{k,0}$ according to section 3, a random timestep t is chosen to branch a new segment with $[q\ \dot q\ r]^t$ used as the initial state. A new goal location $g_{k,1}$ is also chosen randomly for an optimal trajectory $X_{k,1}$. This process represents the character changing direction at some point along its original trajectory plan: "interaction" in this case is simply a change in goal position. This technique allows our initial states and goals to come from a distribution that reflects the character's typical motion. In all our experiments, we use between 100 and 200 trials, each with 5 branched segments.

6 Distributed Training Architecture

Our training algorithm was implemented in an asynchronous, distributed architecture, utilizing a GPU for neural network training. Simple parallelism was achieved by distributing the trajectory optimization processes to multiple node machines, while the resulting data was used to train the NN policy on a single GPU node. Amazon Web Services EC2 c3.8xlarge instances provided the nodes for optimization, while a g2.2xlarge instance provided the GPU. Utilizing a star topology with the GPU instance at the center, a Network File System server distributes the training data X and network parameters θ to the necessary processes within the cluster. Each optimization node is assigned a subset of the total trials and segments for the given task. This simple use of files for data storage meant no supporting infrastructure was needed other than standard file locking for concurrency. We used a custom GPU implementation of stochastic gradient descent (SGD) to train the neural network control policy.
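The trial-and-segment generation of section 5 can be sketched as follows, with `optimize_trajectory` a hypothetical stand-in for the CIO solver of section 3 and the 2-D goal sampling purely illustrative:

```python
import random

# Trials and branched segments: each trial optimizes a trajectory to a random
# goal, then branches n_segments new segments from random timesteps with fresh
# goals. `optimize_trajectory(state, goal, T)` is a hypothetical stand-in and
# should return a length-T list of states.
def generate_dataset(optimize_trajectory, init_state, T,
                     n_trials=100, n_segments=5, seed=0):
    rng = random.Random(seed)
    random_goal = lambda: (rng.uniform(-5, 5), rng.uniform(-5, 5))
    dataset = []
    for _ in range(n_trials):
        X = optimize_trajectory(init_state, random_goal(), T)
        dataset.append(X)
        for _ in range(n_segments):
            t = rng.randrange(T)                      # branch point
            dataset.append(optimize_trajectory(X[t], random_goal(), T))
    return dataset
```

Branching from states visited by earlier solutions is what makes the initial-state and goal distribution reflect the character's typical motion.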
For the first training epoch, all trajectories and action sequences are loaded onto the GPU, randomly shuffling the order of the frames. Then the neural network parameters θ are updated using batched SGD in a single pass over the data to reduce the objective in (4). At the start of subsequent training epochs, trajectories which have been updated by one of the trajectory optimization processes (and injected with new sensor noise ε) are reloaded. Although this architecture is asynchronous, the proximal regularization terms in the objective prevent the training data and policy results from changing too quickly and keep the optimization from diverging. As a result, we can increase our training performance linearly for the size of cluster we are using, to about 30 optimization nodes per GPU machine. We run the overall optimization process until the average of 200 trajectory optimization iterations has been reached across all machines. This usually results in about 10000 neural network training epochs, and takes about 2.5 hours to complete, depending on task parameters and number of nodes. 7 Policy Execution Once we find the optimal policy parameters θ offline, we can execute the resulting policy in realtime under user control. Unlike non-parametric methods like motion graphs or Gaussian Processes, we do not need to keep any trajectory data at execution time. Starting with an initial state x0, we compute sensory state s and query the policy (without noise) for the desired action ˙qdes ˙rdes f . To evolve the physical state of the system, we directly optimize the next state x1 to match ˙qdes while satisfying equations of motion x1 = argmin x
‖q̇ − q̇des‖² + ‖ṙ − ṙdes‖² + ‖f − fdes‖², subject to (6). Note that this is simply the optimization problem (7) with horizon T = 1, which can be solved at real-time rates and does not require any additional implementation. This approach is reminiscent of feature-based control in computer graphics and robotics. Because our physical state evolution is the result of optimization (similar to an implicit integrator), it does not suffer from the instabilities or divergence of Euler integration, and allows the use of larger timesteps (we use ∆t of 50 ms in all our experiments). In the current work, the dynamics constraints are enforced softly and thus may include some root forces in simulation. 8 Results This algorithm was applied to learn a policy that allows interactive locomotion for a range of very different three-dimensional characters. We used a single network architecture and parameters to create all controllers, without any specialized initializations. While the task is locomotion, different character types exhibit very different behaviors. The experiments include three-dimensional swimming and flying characters as well as biped and quadruped walking tasks. Unlike in two-dimensional scenarios, it is much easier for characters to fall or enter unstable regions, yet our method manages to learn successful controllers. We strongly suggest viewing the supplementary video for examples of the resulting behaviors. The swimming creature featured four fins with two degrees of freedom each. It is propelled by lift and drag forces at a simulated water density of 1000 kg/m³. To move, orient, or maintain position, the controller learned to sweep opposite fins down in a cyclical pattern, as in treading water. The bird creature was a modification of the swimmer, with opposing two-segment wings and the medium density changed to that of air (1.2 kg/m³).
The learned behavior that emerged is a cyclical flapping motion (more vigorous now, because of the lower medium density), as well as utilization of lift forces to coast to distant goal positions and modulation of flapping speed to change altitude. Three bipedal creatures were created to explore the controller’s function with respect to contact forces. Two creatures were akin to a humanoid - one large and one small, both with arms - while the other had a very wide torso compared to its height. All characters learned to walk to the target location and orientation with a regular, cyclic gait. The same algorithm also learned a stereotypical trot gait for dog-like and spider-like quadrupeds. This alternating left/right footstep behavior for bipeds, or trot gait for quadrupeds, emerged without any user input or hand-crafting. The costs in the trajectory optimization were to reach the goal position and orientation while minimizing torque usage and contact force magnitudes. We used the MuJoCo physics engine [16] for our dynamics calculations. The values of the algorithmic constants used in all experiments are σε = 10⁻², σγ = 10⁻², α = 10, λ = 10², η = 10⁻². 8.1 Comparative Evaluation We show the performance of our method on a biped walking task in figure 2 under the full method case. To test the contribution of our proposed joint optimization technique, we compared our algorithm to naive neural network training on a static optimal trajectory dataset. We disabled the neural network and generated optimal trajectories according to 5. Then, we performed our regression on this static dataset with no trajectories being re-optimized. The results are shown in the no joint case. We see that at test time, our full method performs two orders of magnitude better than static training. To test the contribution of noise injection, we used our full method but disabled sensory and hidden unit noise (sections 2.1 and 4). The results are shown under the no noise case.
We observe typical overfitting, with good training performance but very poor test performance. In practice, both ablations above lead to policy rollouts that quickly diverge from expected behavior. Additionally, we compared the performance of different policy network architectures on the biped walking task by varying the number of layers and hidden units. The results are shown in table 1. We see that 3 hidden layers of 250 units give the best performance/complexity tradeoff. Model-predictive control (MPC) is another potential choice of a real-time controller for task-driven character behavior. In fact, the trajectory costs for both MPC and our method are very similar. The resulting trajectories, however, end up being different: MPC creates effective trajectories that are not cyclical (both are shown in figure 3 for a bird character). This suggests a significant nullspace of task solutions; among all these solutions, our joint optimization - through the cost terms matching the neural network output - acts to regularize trajectory optimization toward predictable, less chaotic behaviors.

Figure 2: Performance of our full method and two ablated configurations as training progresses over 10000 neural network updates. Mean and variance of the error are over 1000 training and test trials.

Table 1: Mean and variance of joint position error on test rollouts with our method after training with different neural network configurations.
(a) Increasing neurons per layer, with 4 layers: 10 neurons: 0.337 ± 0.06; 25 neurons: 0.309 ± 0.06; 100 neurons: 0.186 ± 0.02; 250 neurons: 0.153 ± 0.02; 500 neurons: 0.148 ± 0.02.
(b) Increasing layers, with 250 neurons per layer: 1 layer: 0.307 ± 0.06; 2 layers: 0.253 ± 0.06; 3 layers: 0.153 ± 0.02; 4 layers: 0.158 ± 0.02.
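As a concrete illustration, the best configuration from Table 1 (3 hidden layers of 250 units) corresponds to a forward pass like the following. This is a minimal NumPy sketch under stated assumptions: the input/output dimensions, tanh nonlinearity, and weight initialization are illustrative choices, not the paper's exact architecture; the hidden-unit noise flag mirrors the training-time noise injection that the no noise ablation disables.

```python
import numpy as np

def init_policy(sizes, rng):
    """One (W, b) pair per layer; sizes like [s_dim, 250, 250, 250, a_dim]."""
    return [(rng.standard_normal((m, n)) / np.sqrt(m), np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def policy_forward(params, s, hidden_noise=0.0, rng=None):
    """Map a sensory state s to a desired action. Gaussian hidden-unit
    noise is injected during training only; at execution it is off."""
    h = np.asarray(s, dtype=float)
    for i, (W, b) in enumerate(params):
        h = h @ W + b
        if i < len(params) - 1:          # nonlinearity on hidden layers only
            h = np.tanh(h)
            if hidden_noise > 0.0:
                h = h + hidden_noise * rng.standard_normal(h.shape)
    return h
```

At execution time this is a handful of matrix products, consistent with the paper's claim that no trajectory data needs to be kept around to run the policy in real time.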
9 Conclusions and Future Work We have presented an automatic way of generating neural network parameters that represent a control policy for physically consistent interactive character control, only requiring a dynamical character model and task description. Using both trajectory optimization and stochastic neural networks together combines correct behavior with real-time interactive use. Furthermore, the same algorithm and controller architecture can provide interactive control for multiple creature morphologies. While the behavior of the characters reflected efficient task completion in this work, additional modifications could be made to affect the style of behavior – costs during trajectory optimization can affect how a task is completed. Incorporation of muscle actuation effects into our character models may result in more biomechanically plausible actions for that (biologically based) character. In addition to changing the character’s physical characteristics, we could explore different neural network architectures and how they compare to biological systems. With this work, we have networks that enable diverse physical action, which could be augmented to further reflect biological sensorimotor systems. This model could be used to experiment with the effects of sensor delays and the resulting motions, for example [2]. This work focused on locomotion of different creatures with the same algorithm. Previous work has demonstrated behaviors such as getting up, climbing, and reaching with the same trajectory optimization method [12]. Real-time policies using this algorithm could allow interactive use of these behaviors as well. Extending beyond character animation, this work could be used to develop controllers for robotics applications that are robust to sensor noise and perturbations if the trained character model accurately reflects the robot’s physical parameters. Figure 3: Typical joint angle trajectories that result from MPC and our method. 
While both trajectories successfully maintain position for a bird character, our method generates trajectories that are cyclic and regular. 8 References [1] P. Chen. Hessian matrix vs. gauss-newton hessian matrix. SIAM J. Numerical Analysis, 49(4):1417–1435, 2011. [2] H. Geyer and H. Herr. A muscle-reflex model that encodes principles of legged mechanics produces human walking dynamics and muscle activities. Neural Systems and Rehabilitation Engineering, IEEE Transactions on, 18(3):263–273, 2010. [3] R. Grzeszczuk, D. Terzopoulos, and G. Hinton. Neuroanimator: Fast neural network emulation and control of physics-based models. In Proceedings of the 25th Annual Conference on Computer Graphics and Interactive Techniques, SIGGRAPH ’98, pages 9–20, New York, NY, USA, 1998. ACM. [4] G. E. Hinton, N. Srivastava, A. Krizhevsky, I. Sutskever, and R. R. Salakhutdinov. Improving neural networks by preventing co-adaptation of feature detectors. arXiv preprint arXiv:1207.0580, 2012. [5] G. M. Hoerzer, R. Legenstein, and W. Maass. Emergence of complex computational structures from chaotic neural networks through reward-modulated hebbian learning. Cerebral Cortex, 2012. [6] D. Huh and E. Todorov. Real-time motor control using recurrent neural networks. In Adaptive Dynamic Programming and Reinforcement Learning, 2009. ADPRL ’09. IEEE Symposium on, pages 42–49, March 2009. [7] A. J. Ijspeert. Central pattern generators for locomotion control in animals and robots: a review, 2008. [8] E. Ju, J. Won, J. Lee, B. Choi, J. Noh, and M. G. Choi. Data-driven control of flapping flight. ACM Trans. Graph., 32(5):151:1–151:12, Oct. 2013. [9] S. Levine and V. Koltun. Learning complex neural network policies with trajectory optimization. In ICML ’14: Proceedings of the 31st International Conference on Machine Learning, 2014. [10] V. Mnih, K. Kavukcuoglu, D. Silver, A. Graves, I. Antonoglou, D. Wierstra, and M. A. Riedmiller. Playing atari with deep reinforcement learning. 
CoRR, abs/1312.5602, 2013. [11] I. Mordatch and E. Todorov. Combining the benefits of function approximation and trajectory optimization. In Robotics: Science and Systems (RSS), 2014. [12] I. Mordatch, E. Todorov, and Z. Popovi´c. Discovery of complex behaviors through contact-invariant optimization. ACM Transactions on Graphics (TOG), 31(4):43, 2012. [13] J. R. Rebula, P. D. Neuhaus, B. V. Bonnlander, M. J. Johnson, and J. E. Pratt. A controller for the littledog quadruped walking on rough terrain. In Robotics and Automation, 2007 IEEE International Conference on, pages 1467–1473. IEEE, 2007. [14] J. Schulman, S. Levine, P. Moritz, M. I. Jordan, and P. Abbeel. Trust region policy optimization. CoRR, abs/1502.05477, 2015. [15] I. Sutskever, J. Martens, G. E. Dahl, and G. E. Hinton. On the importance of initialization and momentum in deep learning. In Proceedings of the 30th International Conference on Machine Learning (ICML-13), volume 28, pages 1139–1147, May 2013. [16] E. Todorov, T. Erez, and Y. Tassa. Mujoco: A physics engine for model-based control. In IROS’12, pages 5026–5033, 2012. [17] P. Vincent, H. Larochelle, Y. Bengio, and P.-A. Manzagol. Extracting and composing robust features with denoising autoencoders. pages 1096–1103, 2008. [18] M. Vukobratovic and B. Borovac. Zero-moment point - thirty five years of its life. I. J. Humanoid Robotics, 1(1):157–173, 2004. [19] S. Wager, S. Wang, and P. Liang. Dropout training as adaptive regularization. In Advances in Neural Information Processing Systems (NIPS), 2013. [20] J. M. Wang, D. J. Fleet, and A. Hertzmann. Optimizing walking controllers for uncertain inputs and environments. ACM Trans. Graph., 29(4):73:1–73:8, July 2010. [21] K. Yin, K. Loken, and M. van de Panne. Simbicon: Simple biped locomotion control. ACM Trans. Graph., 26(3):Article 105, 2007.
SubmodBoxes: Near-Optimal Search for a Set of Diverse Object Proposals Qing Sun Virginia Tech sunqing@vt.edu Dhruv Batra Virginia Tech https://mlp.ece.vt.edu/ Abstract This paper formulates the search for a set of bounding boxes (as needed in object proposal generation) as a monotone submodular maximization problem over the space of all possible bounding boxes in an image. Since the number of possible bounding boxes in an image is very large, O(#pixels²), even a single linear scan to perform the greedy augmentation for submodular maximization is intractable. Thus, we formulate the greedy augmentation step as a Branch-and-Bound scheme. In order to speed up repeated application of B&B, we propose a novel generalization of Minoux’s ‘lazy greedy’ algorithm to the B&B tree. Theoretically, our proposed formulation provides a new understanding of the problem, and contains classic heuristic approaches such as Sliding Window + Non-Maximal Suppression (NMS) and Efficient Subwindow Search (ESS) as special cases. Empirically, we show that our approach leads to state-of-the-art performance on object proposal generation via a novel diversity measure. 1 Introduction A number of problems in Computer Vision and Machine Learning involve searching for a set of bounding boxes or rectangular windows. For instance, in object detection [9,16,17,19,34,36,37], the goal is to output a set of bounding boxes localizing all instances of a particular object category. In object proposal generation [2, 7, 39, 41], the goal is to output a set of candidate bounding boxes that may potentially contain an object (of any category). Other scenarios include face detection, multi-object tracking, and weakly supervised learning [10]. Classical Approach: Enumeration + Diverse Subset Selection.
In the context of object detection, the classical paradigm for searching for a set of bounding boxes used to be: • Sliding Window [9, 16, 40]: i.e., enumeration over all windows in an image with some level of sub-sampling, followed by • Non-Maximal Suppression (NMS): i.e., picking a spatially diverse set of windows by suppressing windows that are too close or overlapping. As several previous works [3,26,40] have recognized, the problem with this approach is inefficiency – the number of possible bounding boxes or rectangular subwindows in an image is O(#pixels²). Even a low-resolution (320 × 240) image contains more than one billion rectangular windows [26]! As a result, modern object detection pipelines [17, 19, 36] often rely on object proposals as a preprocessing step to reduce the number of candidate object locations to a few hundreds or thousands (rather than billions). Interestingly, this migration to object proposals has simply pushed the problem (of searching for a set of bounding boxes) upstream. Specifically, a number of object proposal techniques [8, 32, 41] involve the same enumeration + NMS approach – except they typically use cheaper features to keep the proposal generation step fast. Goal. The goal of this paper is to formally study the search for a set of bounding boxes as an optimization problem. Clearly, enumeration + post-processing for diversity (via NMS) is one widely-used heuristic approach. Our goal is to formulate a formal optimization objective and propose an efficient algorithm, ideally with guarantees on optimization performance. Challenge. The key challenge is the exponentially large search space – the number of possible M-sized sets of bounding boxes is O(#pixels²)^M = O(#pixels^(2M)) (assuming M ≤ #pixels²/2). Figure 1: Overview of our formulation: SubmodBoxes. We formulate the selection of a set of boxes as a constrained submodular maximization problem. The objective and marginal gains consist of two parts: relevance and diversity.
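The billion-window claim is easy to verify: an axis-aligned window is fixed by a choice of (top, bottom) rows and (left, right) columns, giving h(h+1)/2 · w(w+1)/2 windows in an h × w image. A quick check:

```python
def num_windows(w, h):
    """Count axis-aligned rectangular subwindows of a w x h image:
    w(w+1)/2 choices of (left, right) times h(h+1)/2 choices of (top, bottom)."""
    return (w * (w + 1) // 2) * (h * (h + 1) // 2)

print(num_windows(320, 240))  # 1485331200, i.e. ~1.49 billion windows
```

This count grows as O(#pixels²), exactly the |Y| used throughout the paper.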
Figure (b) shows two candidate windows ya and yb. Relevance is the sum of edge strength over all edge groups (black curves) wholly enclosed in the window. Figure (c) shows the diversity term. The marginal gain in diversity due to a new window (ya or yb) is the ability of the new window to cover the reference boxes that are currently not well covered by the already chosen set Y = {y1, y2}. In this case, we can see that ya covers a new reference box b1. Thus, the marginal gain in diversity of ya will be larger than that of yb. Overview of our formulation: SubmodBoxes. Let Y denote the set of all possible bounding boxes or rectangular subwindows in an image. This is a structured output space [4,21,38], with the size of this set growing quadratically with the size of the input image, |Y| = O(#pixels²). We formulate the selection of a set of boxes as a search problem on the power set 2^Y. Specifically, given a budget of M windows, we search for a set Y of windows that are both relevant (e.g., have a high likelihood of containing an object) and diverse (to cover as many object instances as possible): argmax_{Y ∈ 2^Y} F(Y) = R(Y) + λ D(Y) s.t. |Y| ≤ M, (1) where the argmax is a search over the power set, F is the objective, R(Y) the relevance term, λ a trade-off parameter, D(Y) the diversity term, and |Y| ≤ M the budget constraint. Crucially, when the objective function F : 2^Y → R is monotone and submodular, a simple greedy algorithm (that iteratively adds the window with the largest marginal gain [24]) achieves a near-optimal approximation factor of (1 − 1/e) [24,30]. Unfortunately, although conceptually simple, this greedy augmentation step requires an enumeration over the space of all windows Y, and thus a naïve implementation is intractable. In this work, we show that for a broad class of relevance and diversity functions, this greedy augmentation step may be efficiently formulated as a Branch-and-Bound (B&B) step [12, 26], with easily computable upper bounds.
This enables an efficient implementation of greedy, with significantly fewer evaluations than a linear scan over Y. Finally, in order to speed up repeated application of B&B across iterations of the greedy algorithm, we present a novel generalization of Minoux’s ‘lazy greedy’ algorithm [29] to the B&B tree, where different branches are explored in a lazy manner in each iteration. We apply our proposed technique SubmodBoxes to the task of generating object proposals [2,7,39, 41] on the PASCAL VOC 2007 [13], PASCAL VOC 2012 [14], and MS COCO [28] datasets. Our results show that our approach outperforms all baselines. Contributions. This paper makes the following contributions: 1. We formulate the search for a set of bounding boxes or subwindows as the constrained maximization of a monotone submodular function. To the best of our knowledge, despite the popularity of object recognition and object proposal generation, this is the first such formal optimization treatment of the problem. 2. Our proposed formulation contains existing heuristics as special cases. Specifically, Sliding Window + NMS can be viewed as an instantiation of our approach under a specific definition of the diversity function D(·). 3. Our work can be viewed as a generalization of the ‘Efficient Subwindow Search (ESS)’ of Lampert et al. [26], who proposed a B&B scheme for finding the single best bounding box in an image. Their extension to detecting multiple objects consisted of a heuristic for ‘suppressing’ features extracted from the selected bounding box and re-running the procedure. We show that this heuristic is a special case of our formulation under a specific diversity function, thus providing theoretical justification to their intuitive heuristic. 4. To the best of our knowledge, our work presents the first generalization of Minoux’s ‘lazy greedy’ algorithm [29] to structured-output spaces (the space of bounding boxes). 5.
Finally, our experimental contribution is a novel diversity measure which leads to state-of-the-art performance on the task of generating object proposals. 2 Related Work Our work is related to a few different themes of research in Computer Vision and Machine Learning. Submodular Maximization and Diversity. The task of searching for a diverse, high-quality subset of items from a ground set has been well studied in a number of application domains [6, 11, 22, 25, 27, 31], and across these domains submodularity has emerged as a fundamental property of set functions for measuring the diversity of a subset of items. Most previous work has focused on submodular maximization over unstructured spaces, where the ground set is efficiently enumerable. Our work is closest in spirit to Prasad et al. [31], who studied submodular maximization on structured output spaces, i.e., where each item in the ground set is itself a structured object (such as a segmentation of an image). Unlike [31], our ground set Y is not exponentially large, only ‘quadratically’ large. However, enumeration over the ground set for the greedy-augmentation step is still infeasible, and thus we use B&B. Such structured output spaces and greedy-augmentation oracles were not explored in [31]. Bounding Box Search in Object Detection and Object Proposals. As we mention in the introduction, the search for a set of bounding boxes via heuristics such as Sliding Window + NMS used to be the dominant paradigm in object recognition [9, 16, 40]. Modern pipelines have shifted that search step to object proposal algorithms [17,19,36]. A comparison and overview of object proposals may be found in [20]. Zitnick et al. [41] generate candidate bounding boxes via Sliding Window + NMS based on an “objectness” score, which is a function of the number of contours wholly enclosed by a bounding box. We use this objectness score as our relevance term, thus making SubmodBoxes directly comparable to NMS.
Another closely related work is [18], which presents an ‘active search’ strategy for reranking selective search [39] object proposals based on contextual cues. Unlike this work, our formulation is not restricted to any pre-selected set of windows. We search over the entire power set 2^Y, and may generate any possible set of windows (up to the convergence tolerance in B&B). Branch-and-Bound. One key building block of our work is the ‘Efficient Subwindow Search (ESS)’ B&B scheme of Lampert et al. [26]. ESS was originally proposed for single-instance object detection. Their extension to detecting multiple objects consisted of a heuristic for ‘suppressing’ features extracted from the selected bounding box and re-running the procedure. In this work, we extend and generalize ESS in multiple ways. First, we show that relevance (objectness scores) and diversity functions used in the object proposal literature are amenable to upper bounds and thus B&B optimization. We also show that the ‘suppression’ heuristic used by [26] is a special case of our formulation under a specific diversity function, thus providing theoretical justification for their intuitive heuristic. Finally, [3] also proposed the use of B&B for NMS in object detection. Unfortunately, as we explain later in the paper, the NMS objective is submodular but not monotone, and the classical greedy algorithm does not have approximation guarantees in this setting. In contrast, our work presents a general framework for bounding-box subset selection based on monotone submodular maximization. 3 SubmodBoxes: Formulation and Approach We begin by establishing the notation used in the paper. Preliminaries and Notation. For an input image x, let Yx denote the set of all possible bounding boxes or rectangular subwindows in this image. For simplicity, we drop the explicit dependence on x and just use Y. Uppercase letters refer to set functions F(·), R(·), D(·), and lowercase letters refer to functions over individual items f(y), r(y).
A set function F : 2^Y → R is submodular if its marginal gains F(b | S) ≡ F(S ∪ {b}) − F(S) are decreasing, i.e., F(b | S) ≥ F(b | T) for all sets S ⊆ T ⊆ Y and items b ∉ T. The function F is called monotone if adding an item to a set does not hurt, i.e., F(S) ≤ F(T), ∀S ⊆ T. Constrained Submodular Maximization. From the classical result of Nemhauser [30], it is known that cardinality-constrained maximization of a monotone submodular F can be performed near-optimally via a greedy algorithm. We start out with an empty set Y⁰ = ∅, and iteratively add the next ‘best’ item with the largest marginal gain over the chosen set: Y^t = Y^(t−1) ∪ {y^t}, where y^t = argmax_{y∈Y} F(y | Y^(t−1)). (2) The score of the final solution Y^M is within a factor of (1 − 1/e) of the optimal solution. The computational bottleneck is that in each iteration, we must find the item with the largest marginal gain. In our case, Y is the space of all rectangular windows in an image, and exhaustive enumeration is intractable. Instead of exploring subsampling as is done in Sliding Window methods, we will formulate this greedy augmentation step as an optimization problem solved with B&B.

Figure 2: Priority queue in the B&B scheme. Each vertex in the tree represents a set of windows. Blue rectangles denote the largest and the smallest window in the set. The gray region denotes the rectangle set Yv. In each case, the priority queue consists of all leaves in the B&B tree ranked by the upper bound Uv. Left: vertex v is split along the right-coordinate interval into equal halves v1 and v2. Middle: the highest-priority vertex v1 in Q1 is further split along the bottom coordinate into v3 and v4. Right: the highest-priority vertex v4 in Q2 is split along the right coordinate into v5 and v6. This procedure is repeated until the highest-priority vertex in the queue is a single rectangle.

Sets vs Lists. For pedagogical reasons, our problem setup is motivated with the language of sets (Y, 2^Y) and subsets (Y).
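For reference, the greedy scheme in (2) over an explicitly enumerable ground set can be sketched as below; the paper's point is precisely that Y cannot be enumerated, which is what the B&B formulation replaces. Note the sketch returns an ordered list rather than a set, matching the list-prediction view discussed here.

```python
def greedy(ground_set, F, M):
    """Cardinality-constrained greedy maximization of a set function F
    (passed as a function of a list of items). For monotone submodular F,
    the returned ordered list Y is within (1 - 1/e) of optimal."""
    Y = []
    for _ in range(M):
        best, best_gain = None, float("-inf")
        for y in ground_set:
            gain = F(Y + [y]) - F(Y)   # marginal gain F(y | Y)
            if gain > best_gain:
                best, best_gain = y, gain
        Y.append(best)
    return Y
```

With a coverage function (size of the union of chosen sets, a standard monotone submodular example), greedy picks the largest set first and then the set covering the most new elements.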
In practice, our work falls under submodular list prediction [11, 33, 35]. The generalization from sets to lists allows reasoning about an ordering of the items chosen and (potentially) repeated entries in the list. Our final solution Y^M is an (ordered) list, not an (unordered) set. All guarantees of greedy remain the same in this generalization [11,33,35]. 3.1 Parameterization of Y and Branch-and-Bound Search In this subsection, we briefly recap the Efficient Subwindow Search (ESS) of Lampert et al. [26], which is used as a key building block in this work. The goal of [26] is to maximize a (potentially non-smooth) objective function over the space of all rectangular windows: max_{y∈Y} f(y). A rectangular window y ∈ Y is parameterized by its top, bottom, left, and right coordinates y = (t, b, l, r). A set of windows is represented by using intervals for each coordinate instead of a single integer, for example [T, B, L, R], where T = [t_low, t_high] is a range. In this parameterization, the set of all possible boxes in an (h×w)-sized image can be written as Y = [[1, h], [1, h], [1, w], [1, w]]. Branch-and-Bound over Y. ESS creates a B&B tree, where each vertex v in the tree is a rectangle set Yv with an associated upper bound on the objective function achievable in this set, i.e., max_{y∈Yv} f(y) ≤ Uv. Initially, this tree consists of a single vertex, which is the entire search space Y with a (typically) loose upper bound. ESS proceeds in a best-first manner [26]. In each iteration, the vertex/set with the highest upper bound is chosen for branching, and new upper bounds are then computed on each of the two children/sub-sets created. In practice, this is implemented with a priority queue over the vertices/sets that are currently leaves in the tree. Fig. 2 shows an illustration of this procedure. The parent rectangle set is split along its largest coordinate interval into two equal halves, thus forming disjoint children sets.
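The branching step on this parameterization can be sketched directly; a minimal sketch in which a window set is [T, B, L, R] with each coordinate an inclusive [lo, hi] interval, as above.

```python
def split(window_set):
    """ESS branching step: split a rectangle set along its widest
    coordinate interval into two disjoint halves."""
    widths = [hi - lo for lo, hi in window_set]
    k = widths.index(max(widths))        # coordinate with the widest interval
    lo, hi = window_set[k]
    mid = (lo + hi) // 2
    left = [list(iv) for iv in window_set]
    right = [list(iv) for iv in window_set]
    left[k] = [lo, mid]
    right[k] = [mid + 1, hi]
    return left, right

# The full search space of an h x w image is [[1,h],[1,h],[1,w],[1,w]]:
h, w = 240, 320
v1, v2 = split([[1, h], [1, h], [1, w], [1, w]])
```

Because the halves are disjoint, every concrete window remains reachable in exactly one branch, so repeatedly splitting the highest-upper-bound leaf eventually isolates single rectangles.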
B&B explores the tree in a best-first manner until a single rectangle is identified with a score equal to its upper bound, at which point we have found a global optimum. In our experiments, we show results with different convergence tolerances. Objective. In our setup, the objective (at each greedy-augmentation step) is the marginal gain of the window y w.r.t. the currently chosen list of windows Y^(t−1), i.e., f(y) = F(y | Y^(t−1)) = R(y | Y^(t−1)) + λD(y | Y^(t−1)). In the following subsections, we describe the relevance and diversity terms in detail, and show how upper bounds can be efficiently computed over sets of windows. 3.2 Relevance Function and Upper Bound The goal of the relevance function R(Y) is to quantify the “quality” or “relevance” of the windows chosen in Y. In our work, we define R(Y) to be a modular function aggregating the quality of all chosen windows, i.e., R(Y) = Σ_{y∈Y} r(y). Thus, the marginal gain of window y is simply its individual quality, regardless of what else has already been chosen, i.e., R(y | Y^(t−1)) = r(y). In our application of object proposal generation, we use the objectness score produced by EdgeBoxes [41] as our relevance function. The main intuition of EdgeBoxes is that the number of contours or “edge groups” wholly contained in a box is indicative of its objectness score. Thus, it first creates a grouping of edge pixels called edge groups, each associated with a real-valued edge strength si. Abstracting away some of the domain-specific details, EdgeBoxes essentially defines the score of a box as a weighted sum of the strengths of the edge groups contained in it, normalized by the size of the box, i.e., EdgeBoxesScore(y) = (Σ_{edge group i ∈ y} w_i s_i) / size-normalization(y), where, with a slight abuse of notation, we use ‘edge group i ∈ y’ to mean the edge groups contained in the rectangle y. These weights and size normalizations were found to improve the performance of EdgeBoxes.
In our work, we use a simplification of the EdgeBoxesScore which allows for easy computation of upper bounds: r(y) = (Σ_{edge group i ∈ y} s_i) / size-normalization(y), (3) i.e., we ignore the weights. One simple upper bound for a set of windows Yv can be computed by accumulating all possible positive scores and the least necessary negative scores: max_{y∈Yv} r(y) ≤ (Σ_{edge group i ∈ ymax} s_i · [[s_i ≥ 0]] + Σ_{edge group i ∈ ymin} s_i · [[s_i ≤ 0]]) / size-normalization(ymin), (4) where ymax is the largest and ymin is the smallest box in the set Yv, and [[·]] is the Iverson bracket. Consistent with the experiments in [41], we found that this simplification indeed hurts performance in the EdgeBoxes Sliding Window + NMS pipeline. However, interestingly, we found that even with this weaker relevance term, SubmodBoxes was able to outperform EdgeBoxes. Thus, the drop in performance due to a weaker relevance term was more than compensated for by the ability to perform B&B jointly on the relevance and diversity terms. 3.3 Diversity Function and Upper Bound The goal of the diversity function D(Y) is to encourage non-redundancy in the chosen set of windows and potentially capture different objects in the image. Before we introduce our own diversity function, we show how existing heuristics in object detection and proposal generation can be written as special cases of this formulation, under specific diversity functions. Sliding Window + NMS. Non-Maximal Suppression (NMS) is the most popular heuristic for selecting diverse boxes in computer vision. NMS is typically explained procedurally – select the highest-scoring window y1, suppress all windows that overlap with y1 by more than some threshold, select the next highest-scoring window y2, rinse and repeat. This procedure can be explained as a special case of our formulation. Sliding Window corresponds to enumeration over Y with some level of sub-sampling (or stride), typically with a fixed aspect ratio.
Each step in NMS is precisely a greedy augmentation step under the following marginal gain: argmax_{y ∈ Y_sub-sampled} r(y) + λ D_NMS(y | Y^(t−1)), where (5a) D_NMS(y | Y^(t−1)) = 0 if max_{y′∈Y^(t−1)} IoU(y′, y) ≤ NMS-threshold, and −∞ otherwise. (5b) Intuitively, the NMS diversity function imposes an infinite penalty if a new window y overlaps with a previously chosen y′ by more than a threshold, and offers no reward for diversity beyond that. This explains the NMS procedure of suppressing overlapping windows and picking the highest-scoring one among the unsuppressed ones. Notice that this diversity function is submodular but not monotone (the marginal gains may be negative). A similar observation was made in [3]. For such non-monotone functions, greedy does not have approximation guarantees and different techniques are needed [5,15]. This is an interesting perspective on the classical NMS heuristic. ESS Heuristic [26]. ESS was originally proposed for single-instance object detection. Their extension to detecting multiple instances consisted of a heuristic for suppressing the features extracted from the selected bounding box and re-running the procedure. Since their scoring function was linear in the features, this heuristic of suppressing features and rerunning B&B can be expressed as a greedy augmentation step under the following marginal gain: argmax_{y∈Y} r(y) + λ D_ESS(y | Y^(t−1)), where D_ESS(y | Y^(t−1)) = −r(y ∩ (y1 ∪ y2 ∪ … ∪ y^(t−1))) (6) i.e., the ESS diversity function subtracts the score contribution coming from the intersection region. If r(·) is non-negative, it is easy to see that this diversity function is monotone and submodular – adding a new window never hurts, and since the marginal gain is the score contribution of the new regions not covered by previous windows, it is naturally diminishing. Thus, even though this heuristic was not presented as such, the authors of [26] did in fact formulate a near-optimal greedy algorithm for maximizing a monotone submodular function.
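The NMS-as-greedy view in (5) can be made concrete. This is a sketch over a small pre-enumerated candidate list (standing in for the sub-sampled sliding-window set), with boxes in the paper's (t, b, l, r) parameterization; the infinite D_NMS penalty becomes a simple skip.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (t, b, l, r)."""
    t, bt = max(a[0], b[0]), min(a[1], b[1])
    l, r = max(a[2], b[2]), min(a[3], b[3])
    inter = max(0, bt - t) * max(0, r - l)
    area = lambda y: (y[1] - y[0]) * (y[3] - y[2])
    return inter / (area(a) + area(b) - inter)

def nms_as_greedy(boxes, scores, M, thresh=0.5):
    """NMS written as greedy maximization of r(y) + D_NMS(y | chosen):
    a box overlapping a chosen box above thresh has gain -inf (skipped)."""
    chosen = []
    for _ in range(M):
        best, best_gain = None, float("-inf")
        for y, r in zip(boxes, scores):
            if y in chosen:
                continue
            if any(iou(yp, y) > thresh for yp in chosen):
                continue  # D_NMS = -inf: infinite diversity penalty
            if r > best_gain:
                best, best_gain = y, r
        if best is None:
            break
        chosen.append(best)
    return chosen
```

As the text notes, this objective is submodular but not monotone, so the (1 − 1/e) guarantee of greedy does not apply here; the sketch only illustrates the equivalence of the procedures.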
Unfortunately, while r(·) is always positive in our experiments, this was not the case in the experimental setup of [26]. Our Diversity Function. Instead of hand-designing an explicit diversity function, we use a function that implicitly measures diversity in terms of coverage of a reference set of bounding boxes B. This reference set of boxes may be a uniform sub-sampling of the space of windows, as done in Sliding Window methods, or may itself be the output of another object proposal method such as Selective Search [39]. Specifically, each greedy augmentation step under our formulation is given by: argmax_{y∈Y} r(y) + λ D_coverage(y | Y^{t−1}), where D_coverage(y | Y^{t−1}) = max_{b∈B} δ_IoU(y, b | Y^{t−1}) (7a) δ_IoU(y, b | Y^{t−1}) = max{IoU(y, b) − max_{y′∈Y^{t−1}} IoU(y′, b), 0}. (7b) Intuitively speaking, the marginal gain of a new window y under our diversity function is the largest gain in coverage of exactly one of the reference boxes. We can also formulate this diversity function as a maximum bipartite matching problem between the proposal boxes Y and the reference boxes B (in our experiments, we also study performance under top-k matches). We show in the supplement that this marginal gain is always non-negative and decreasing with larger Y^{t−1}; thus the diversity function is monotone submodular. All that remains is to compute an upper bound on this marginal gain. Ignoring constants, the key term to bound is IoU(y, b). We can upper-bound this term by computing the intersection w.r.t. the largest window in the window set, y_max, and computing the union w.r.t. the smallest window, y_min, i.e., max_{y∈Y_v} IoU(y, b) ≤ area(y_max ∩ b) / area(y_min ∪ b). 4 Speeding up Greedy with Minoux's 'Lazy Greedy' In order to speed up repeated application of B&B across iterations of the greedy algorithm, we now present an application of Minoux's 'lazy greedy' algorithm [29] to the B&B tree.
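A minimal sketch of the coverage diversity term in Eqs. (7a)-(7b); here `chosen` plays the role of Y^{t−1} and `refs` of B, the IoU helper assumes simple (x1, y1, x2, y2) boxes, and all names are illustrative rather than the released code.

```python
def iou(a, b):
    # Intersection-over-union of two (x1, y1, x2, y2) boxes.
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def delta_iou(y, b, chosen):
    # Eq. (7b): improvement in how well reference box b is covered by y,
    # relative to the best coverage achieved by already-chosen boxes.
    best_prev = max((iou(yp, b) for yp in chosen), default=0.0)
    return max(iou(y, b) - best_prev, 0.0)

def d_coverage(y, chosen, refs):
    # Eq. (7a): the largest coverage gain over any single reference box.
    return max(delta_iou(y, b, chosen) for b in refs)
```

Once a reference box is perfectly covered, no later window can gain anything from it, which is the diminishing-returns behavior that makes the function monotone submodular.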
The key insight of classical lazy greedy is that the marginal gain function F(y | Y^t) is a non-increasing function of t (due to submodularity of F). Thus, at time t−1, we can cache the priority queue of marginal gains F(y | Y^{t−2}) for all items. At time t, lazy greedy does not recompute all marginal gains. Rather, the item at the front of the priority queue is picked, its marginal gain is updated to F(y | Y^{t−1}), and the item is reinserted into the queue. Crucially, if the item remains at the front of the priority queue, lazy greedy can stop: we have found the item with the largest marginal gain. Interleaving Lazy Greedy with B&B. In our work, the priority queue does not contain single items but rather sets of windows Y_v corresponding to the vertices in the B&B tree. Thus, we must interleave the lazy updates with the Branch-and-Bound steps. Specifically, we pick a set from the front of the queue, compute the upper bound on its marginal gain, and reinsert the set into the priority queue. Once a set remains at the front of the priority queue after reinsertion, we have found the set with the highest upper bound. This is when we perform a B&B step, i.e., split this set into two children, compute the upper bounds on the children, and insert them into the queue. Figure 3: Interleaving Lazy Greedy with B&B. The first few steps update upper bounds, followed by finally branching on a set. Some sets, such as v2, are never updated or split, resulting in a speed-up. Fig. 3 illustrates how the priority queue and B&B tree are updated in this process. Suppose at the end of iteration t−1 and the beginning of iteration t, we have the priority queue shown on the left. The first few updates involve recomputing the upper bounds on the window sets (v6, v5, v3), followed by branching on v3 because it continues to stay on top of the queue, creating new vertices v7, v8. Notice that v2 is never explored (updated or split), resulting in a speed-up. 5 Experiments Setup.
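Classical (single-item) lazy greedy can be sketched with a max-heap of stale marginal gains; the interleaved B&B variant replaces items with window sets and exact gains with upper bounds. This generic sketch (our naming) only assumes that `gain` is submodular:

```python
import heapq

def lazy_greedy(items, gain, m):
    """Minoux's lazy greedy. `gain(x, chosen)` must be non-increasing as
    `chosen` grows (submodularity), so cached values are valid upper bounds."""
    chosen = []
    # Initial bounds: marginal gains w.r.t. the empty set (index breaks ties).
    heap = [(-gain(x, []), i, x) for i, x in enumerate(items)]
    heapq.heapify(heap)
    while heap and len(chosen) < m:
        _, i, x = heapq.heappop(heap)          # stale (optimistic) front item
        fresh = gain(x, chosen)                # refresh its marginal gain
        if not heap or -fresh <= heap[0][0]:   # still beats all other bounds?
            chosen.append(x)                   # -> exact greedy choice
        else:
            heapq.heappush(heap, (-fresh, i, x))
    return chosen
```

For a coverage-style gain this returns exactly the classical greedy solution while refreshing far fewer marginal gains than naive greedy.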
We evaluate SubmodBoxes for object proposal generation on three datasets: PASCAL VOC 2007 [13], PASCAL VOC 2012 [14], and MS COCO [28]. The goal of the experiments is to validate our approach by testing the accuracy of the generated object proposals and the ability to handle different kinds of reference boxes, and to observe trends as we vary multiple parameters. [Figure 4: ABO vs. No. of Proposals on (a) Pascal VOC 2007, (b) Pascal VOC 2012, (c) MS COCO; each panel compares SubmodBoxes, SubmodBoxes with λ = ∞, EB50/EB70/EB90 with and without affinities, SS, and SS-EB.] Evaluation. To evaluate the quality of our object proposals, we use the Mean Average Best Overlap (MABO) score. Given a set of ground-truth boxes GT_c for a class c, ABO is calculated by averaging the best IoU between each ground-truth bounding box and all object proposals: ABO_c = (1/|GT_c|) Σ_{g∈GT_c} max_{y∈Y} IoU(g, y) (8) MABO is the mean ABO over all classes. Weighing the Reference Boxes. Recall that the marginal gain of our proposed diversity function rewards covering the reference boxes with the chosen set of boxes. Instead of weighing all reference boxes equally, we found it important to weigh different reference boxes differently. The exact form of the weighting rule is provided in the supplement. In our experiments, we present results with and without such a weighting to show the impact of our proposed scheme. 5.1 Accuracy of Object Proposals In this section, we explore the performance of our proposed method in comparison to relevant object proposal generators.
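Eq. (8) and MABO translate directly into code. This small sketch assumes axis-aligned (x1, y1, x2, y2) boxes and class-agnostic proposals; the helper names are our own.

```python
def iou(a, b):
    # Intersection-over-union of two (x1, y1, x2, y2) boxes.
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def abo(gt_boxes, proposals):
    # Eq. (8): average, over ground-truth boxes, of the best IoU achieved
    # by any proposal.
    return sum(max(iou(g, y) for y in proposals) for g in gt_boxes) / len(gt_boxes)

def mabo(gt_by_class, proposals):
    # MABO: mean ABO over classes (the proposal set is class-agnostic).
    return sum(abo(gt, proposals) for gt in gt_by_class.values()) / len(gt_by_class)
```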
For the two PASCAL datasets, we perform cross-validation on the 2510 validation images of PASCAL VOC 2007 for the best parameter λ, then report accuracies on the 4952 test images of PASCAL VOC 2007 and the 5823 validation images of PASCAL VOC 2012. The MS COCO dataset is much larger, so we randomly select a subset of 5000 training images for tuning λ, and test on the complete validation set of 40138 images. We use the 1000 top-ranked Selective Search windows [39] as reference boxes. In a manner similar to [23], we chose a different λ_M for M = 100, 200, 400, 600, 800, 1000 proposals. We compare our approach with several baselines: 1) λ = ∞, which essentially involves re-ranking Selective Search windows by considering their ability to cover other boxes. 2) Three variants of EdgeBoxes [41] at IoU = 0.5, 0.7 and 0.9, and the corresponding three variants without affinities, as in (3). 3) Selective Search: compute multiple hierarchical segments by grouping superpixels and place bounding boxes around them. 4) SS-EB: use the EdgeBoxes score to re-rank Selective Search windows. Fig. 4 shows that our approach both at λ = ∞ and at a validation-tuned λ outperforms all baselines. At M = 25, 100, and 500, our approach is 20%, 11%, and 3% better than Selective Search and 14%, 10%, and 6% better than EdgeBoxes70, respectively. 5.2 Ablation Studies. We now study the performance of our system under different components and parameter settings. Effect of λ and Reference Boxes. We test the performance of our approach as a function of λ using reference boxes from different object proposal generators (all reported at M = 200 on PASCAL VOC 2012). Our reference box generators are: 1) Selective Search [39]; 2) MCG [2]; 3) CPMC [7]; 4) EdgeBoxes [41] at IoU = 0.7; 5) Objectness [1]; and 6) Uniform sampling [20]: i.e., uniformly sample the bounding-box center position, square-root area and log aspect ratio. Table 1 shows the performance of SubmodBoxes when used with these different reference box generators.
Our approach shows improvement (over the corresponding method) for all reference boxes. Our approach outperforms the current state of the art, MCG, by 2% and Selective Search by 5%. This is significantly larger than previous improvements reported in the literature. Fig. 5a shows more fine-grained behavior as λ is varied. At λ = 0, all methods produce the same (highest-weighted) box M times. At λ = ∞, they all perform a reranking of the reference set of boxes. In nearly all curves, there is a peak at some intermediate setting of λ. The only exception is EdgeBoxes, which is expected since it is being used in both the relevance and diversity terms. Effect of No. of B&B Steps. We analyze the convergence trends of B&B. Fig. 5b shows that both the optimization objective value and the mABO increase with the number of B&B iterations.

                               Selective-Search   MCG      EB       CPMC     Objectness   Uniform-sampling
λ ≈ 0.4, weighting             0.7342             0.7377   0.6747   0.7125   0.6131       0.5937
λ ≈ 0.4, without weighting     0.5697             0.5042   0.6350   0.5681   0.6220       0.5136
λ = 10, weighting              0.7233             0.7417   0.6467   0.7130   0.5006       0.5478
λ = 10, without weighting      0.5844             0.5534   0.6232   0.5849   0.5920       0.5115
λ = ∞, weighting               0.7222             0.7409   0.6558   0.7116   0.4980       0.5453
Original method                0.6817             0.7206   0.6755   0.7032   0.6038       0.5295

Table 1: Comparison with/without the weighting scheme (rows) with different reference boxes (columns). The 'Original method' row shows the performance of directly using object proposals from these proposal generators. '≈' means we report the best performance from λ = 0.3, 0.4 and 0.5, since the peak occurs at a different λ for different object proposal generators. [Figure 5: Experiments on different parameter settings: (a) performance vs. λ with different reference box generators (SS, Objectness, EB, MCG, Uniform, CPMC); (b) objective value and performance vs. number of iterations; (c) performance vs. number of matching boxes.] Effect of No. of Matching Boxes. Instead of allowing the chosen boxes to cover exactly one reference box, we analyze the effect of matching the top-k reference boxes. Fig. 5c shows that the performance decreases monotonically by a small amount as more matches are allowed. [Figure 6: Comparison of the number of B&B iterations of our Lazy Greedy generalization and independent B&B runs.] Speed-up via Lazy Greedy. Fig. 6 compares the number of B&B iterations required with and without our proposed Lazy Greedy generalization (averaged over 100 randomly chosen images): Lazy Greedy significantly reduces the number of B&B iterations required. The cost of each B&B evaluation is nearly the same, so the iteration speed-up is directly proportional to a time speed-up. 6 Conclusions To summarize, we formally studied the search for a set of diverse bounding boxes as an optimization problem and provided theoretical justification for greedy and heuristic approaches used in prior work. The key challenge of this problem is the large search space. Thus, we proposed a generalization of Minoux's 'lazy greedy' on the B&B tree to speed up classical greedy. We tested our formulation on three object detection datasets: PASCAL VOC 2007, PASCAL VOC 2012 and Microsoft COCO. Results show that our formulation with a novel diversity measure outperforms all baselines. Acknowledgements. This work was partially supported by a National Science Foundation CAREER award, an Army Research Office YIP award, an Office of Naval Research grant, an AWS in Education Research Grant, and GPU support by NVIDIA. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of the U.S. Government or any sponsor. References [1] B. Alexe, T.
Deselaers, and V. Ferrari. Measuring the objectness of image windows. PAMI, 34(11):2189–2202, Nov 2012. [2] P. Arbelaez, J. Pont-Tuset, J. T. Barron, F. Marques, and J. Malik. Multiscale combinatorial grouping. In CVPR, 2014. [3] M. Blaschko. Branch and bound strategies for non-maximal suppression in object detection. In EMMCVPR, pages 385–398, 2011. [4] M. B. Blaschko and C. H. Lampert. Learning to localize objects with structured output regression. In ECCV, 2008. [5] N. Buchbinder, M. Feldman, J. Naor, and R. Schwartz. A tight (1/2) linear-time approximation to unconstrained submodular maximization. In FOCS, 2012. [6] J. Carbonell and J. Goldstein. The use of MMR, diversity-based reranking for reordering documents and producing summaries. In SIGIR, pages 335–336, 1998. [7] J. Carreira and C. Sminchisescu. Constrained parametric min-cuts for automatic object segmentation. In CVPR, 2010. [8] M.-M. Cheng, Z. Zhang, W.-Y. Lin, and P. Torr. BING: Binarized normed gradients for objectness estimation at 300fps. In CVPR, 2014. [9] N. Dalal and B. Triggs. Histograms of oriented gradients for human detection. In CVPR, 2005. [10] T. Deselaers, B. Alexe, and V. Ferrari. Localizing objects while learning their appearance. In ECCV, 2010. [11] D. Dey, T. Liu, M. Hebert, and J. A. Bagnell. Contextual sequence prediction with application to control library optimization. In Robotics Science and Systems (RSS), 2012. [12] E. L. Lawler and D. E. Wood. Branch-and-bound methods: A survey. Operations Research, 14(4):699–719, 1966. [13] M. Everingham, L. Van Gool, C. K. I. Williams, J. Winn, and A. Zisserman. The PASCAL Visual Object Classes Challenge 2007 (VOC2007) Results. http://www.pascalnetwork.org/challenges/VOC/voc2007/workshop/index.html. [14] M. Everingham, L. Van Gool, C. K. I. Williams, J. Winn, and A.
Zisserman. The PASCAL Visual Object Classes Challenge 2012 (VOC2012) Results. http://www.pascalnetwork.org/challenges/VOC/voc2012/workshop/index.html. [15] U. Feige, V. Mirrokni, and J. Vondrák. Maximizing non-monotone submodular functions. In FOCS, 2007. [16] P. F. Felzenszwalb, R. B. Girshick, D. McAllester, and D. Ramanan. Object detection with discriminatively trained part based models. PAMI, 32(9):1627–1645, 2010. [17] R. Girshick, J. Donahue, T. Darrell, and J. Malik. Rich feature hierarchies for accurate object detection and semantic segmentation. In CVPR, 2014. [18] A. Gonzalez-Garcia, A. Vezhnevets, and V. Ferrari. An active search strategy for efficient object detection. In CVPR, 2015. [19] K. He, X. Zhang, S. Ren, and J. Sun. Spatial pyramid pooling in deep convolutional networks for visual recognition. In ECCV, 2014. [20] J. Hosang, R. Benenson, and B. Schiele. How good are detection proposals, really? In BMVC, 2014. [21] T. Joachims, T. Finley, and C.-N. Yu. Cutting-plane training of structural SVMs. Machine Learning, 77(1):27–59, 2009. [22] D. Kempe, J. Kleinberg, and E. Tardos. Maximizing the spread of influence through a social network. In KDD, 2003. [23] P. Krahenbuhl and V. Koltun. Learning to propose objects. In CVPR, 2015. [24] A. Krause and D. Golovin. Submodular function maximization. In Tractability: Practical Approaches to Hard Problems. Cambridge University Press, 2014. [25] A. Krause, A. Singh, and C. Guestrin. Near-optimal sensor placements in Gaussian processes: Theory, efficient algorithms and empirical studies. J. Mach. Learn. Res., 9:235–284, 2008. [26] C. H. Lampert, M. B. Blaschko, and T. Hofmann. Efficient subwindow search: A branch and bound framework for object localization. TPAMI, 31(12):2129–2142, 2009. [27] H. Lin and J. Bilmes. A class of submodular functions for document summarization. In ACL, 2011.
[28] T.-Y. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ramanan, P. Dollár, and C. L. Zitnick. Microsoft COCO: Common objects in context. In ECCV, 2014. [29] M. Minoux. Accelerated greedy algorithms for maximizing submodular set functions. Optimization Techniques, pages 234–243, 1978. [30] G. Nemhauser, L. Wolsey, and M. Fisher. An analysis of approximations for maximizing submodular set functions. Mathematical Programming, 14(1):265–294, 1978. [31] A. Prasad, S. Jegelka, and D. Batra. Submodular meets structured: Finding diverse subsets in exponentially-large structured item sets. In NIPS, 2014. [32] S. Ren, K. He, R. Girshick, and J. Sun. Faster R-CNN: Towards real-time object detection with region proposal networks. In NIPS, 2015. [33] S. Ross, J. Zhou, Y. Yue, D. Dey, and J. A. Bagnell. Learning policies for contextual submodular prediction. In ICML, 2013. [34] P. Sermanet, D. Eigen, X. Zhang, M. Mathieu, R. Fergus, and Y. LeCun. OverFeat: Integrated recognition, localization and detection using convolutional networks. In ICLR, 2014. [35] M. Streeter and D. Golovin. An online algorithm for maximizing submodular functions. In NIPS, 2008. [36] C. Szegedy, S. Reed, and D. Erhan. Scalable, high-quality object detection. In CVPR, 2014. [37] C. Szegedy, A. Toshev, and D. Erhan. Deep neural networks for object detection. In NIPS, 2013. [38] B. Taskar, C. Guestrin, and D. Koller. Max-margin Markov networks. In NIPS, 2003. [39] J. Uijlings, K. van de Sande, T. Gevers, and A. Smeulders. Selective search for object recognition. IJCV, 2013. [40] P. Viola and M. J. Jones. Robust real-time face detection. Int. J. Comput. Vision, 57(2):137–154, May 2004. [41] C. Zitnick and P. Dollar. Edge boxes: Locating object proposals from edges. In ECCV, 2014.
Inferring Algorithmic Patterns with Stack-Augmented Recurrent Nets Armand Joulin Facebook AI Research 770 Broadway, New York, USA. ajoulin@fb.com Tomas Mikolov Facebook AI Research 770 Broadway, New York, USA. tmikolov@fb.com Abstract Despite the recent achievements in machine learning, we are still very far from achieving real artificial intelligence. In this paper, we discuss the limitations of standard deep learning approaches and show that some of these limitations can be overcome by learning how to grow the complexity of a model in a structured way. Specifically, we study the simplest sequence prediction problems that are beyond the scope of what is learnable with standard recurrent networks: algorithmically generated sequences which can only be learned by models which have the capacity to count and to memorize sequences. We show that some basic algorithms can be learned from sequential data using a recurrent network associated with a trainable memory. 1 Introduction Machine learning aims to find regularities in data to perform various tasks. Historically there have been two major sources of breakthroughs: scaling up the existing approaches to larger datasets, and development of novel approaches [5, 14, 22, 30]. In recent years, a lot of progress has been made in scaling up learning algorithms, by either using alternative hardware such as GPUs [9] or by taking advantage of large clusters [28]. While improving the computational efficiency of existing methods is important for deploying the models in real-world applications [4], it is crucial for the research community to continue exploring novel approaches able to tackle new problems. Recently, deep neural networks have become very successful at various tasks, leading to a shift in the computer vision [21] and speech recognition communities [11].
This breakthrough is commonly attributed to two aspects of deep networks: their similarity to the hierarchical, recurrent structure of the neocortex and the theoretical justification that certain patterns are more efficiently represented by functions employing multiple non-linearities instead of a single one [1, 25]. This paper investigates which patterns are difficult to represent and learn with the current state-of-the-art methods. This would hopefully give us hints about how to design new approaches which will advance machine learning research further. In the past, this approach has led to crucial breakthrough results: the well-known XOR problem is an example of a trivial classification problem that cannot be solved using linear classifiers, but can be solved with a non-linear one. This popularized the use of non-linear hidden layers [30] and kernel methods [2]. Another well-known example is the parity problem described by Papert and Minsky [25]: it demonstrates that while a single non-linear hidden layer is sufficient to represent any function, it is not guaranteed to represent it efficiently, and in some cases can even require exponentially many more parameters (and thus, also training data) than what is sufficient for a deeper model. This led to the use of architectures that have several layers of non-linearities, currently known as deep learning models. Following this line of work, we study basic patterns which are difficult to represent and learn for standard deep models. In particular, we study learning regularities in sequences of symbols generated by simple algorithms.

Sequence generator                       Example
{a^n b^n | n > 0}                        aabbaaabbbabaaaaabbbbb
{a^n b^n c^n | n > 0}                    aaabbbcccabcaaaaabbbbbccccc
{a^n b^n c^n d^n | n > 0}                aabbccddaaabbbcccdddabcd
{a^n b^{2n} | n > 0}                     aabbbbaaabbbbbbabb
{a^n b^m c^{n+m} | n, m > 0}             aabcccaaabbcccccabcc
n ∈ [1, k], X → nXn, X → = (k = 2)       12=212122=221211121=12111

Table 1: Examples generated from the algorithms studied in this paper. In bold, the characters which can be predicted deterministically. During training, we do not have access to this information, and at test time we evaluate only on deterministically predictable characters. Interestingly, we find that these regularities are difficult to learn even for some advanced deep learning methods, such as recurrent networks. We attempt to increase the learning capabilities of recurrent nets by allowing them to learn how to control an infinite structured memory. We explore two basic topologies of the structured memory: a pushdown stack, and a list. Our structured memory is defined by constraining part of the recurrent matrix in a recurrent net [24]. We use multiplicative gating mechanisms as learnable controllers over the memory [8, 19] and show that this allows our network to operate as if it were performing simple read and write operations, such as PUSH or POP for a stack. Among recent work with similar motivation, we are aware of the Neural Turing Machine [17] and Memory Networks [33]. However, our work can be considered more of a follow-up of the research done in the early nineties, when similar types of memory-augmented neural networks were studied [12, 26, 27, 37]. 2 Algorithmic Patterns We focus on sequences generated by simple, short algorithms. The goal is to learn regularities in these sequences by building predictive models. We are mostly interested in discrete patterns related to those that occur in the real world, such as various forms of a long-term memory. More precisely, we suppose that during training we have only access to a stream of data which is obtained by concatenating sequences generated by a given algorithm. We do not have access to the boundary of any sequence nor to sequences which are not generated by the algorithm. We denote the regularities in these sequences of symbols as algorithmic patterns.
In this paper, we focus on algorithmic patterns which involve some form of counting and memorization. Examples of these patterns are presented in Table 1. For simplicity, we mostly focus on the unary and binary numeral systems to represent patterns. This allows us to focus on designing a model which can learn these algorithms when the input is given in its simplest form. Some algorithms can be described by context-free grammars; however, we are interested in the more general case of sequential patterns that have a short description length in some general Turing-complete computational system. Of particular interest are patterns relevant to developing better language understanding. Finally, this study is limited to patterns whose symbols can be predicted in a single computational step, leaving out algorithms such as sorting or dynamic programming. 3 Related work Some of the algorithmic patterns we study in this paper are closely related to context-free and context-sensitive grammars which were widely studied in the past. Some works used recurrent networks with hardwired symbolic structures [10, 15, 18]. These networks are continuous implementations of symbolic systems, and can deal with recursive patterns in computational linguistics. While these approaches are interesting for understanding the link between symbolic and sub-symbolic systems such as neural networks, they are often hand-designed for each specific grammar. Wiles and Elman [34] show that simple recurrent networks are able to learn sequences of the form a^n b^n and generalize on a limited range of n. While this is a promising result, their model does not truly learn how to count but instead relies mostly on memorization of the patterns seen in the training data. Rodriguez et al. [29] further studied the behavior of this network. Grünwald [18] designs a hardwired second-order recurrent network to tackle similar sequences. Christiansen and Chater [7] extended these results to grammars with larger vocabularies.
This work shows that this type of architecture can learn complex internal representations of the symbols, but it cannot generalize to longer sequences generated by the same algorithm. Besides using simple recurrent networks, other structures have been used to deal with recursive patterns, such as pushdown dynamical automata [31] or sequential cascaded networks [3, 27]. Hochreiter and Schmidhuber [19] introduced the Long Short-Term Memory network (LSTM) architecture. While this model was originally developed to address the vanishing and exploding gradient problems, LSTM is also able to learn simple context-free and context-sensitive grammars [16, 36]. This is possible because its hidden units can choose, through a multiplicative gating mechanism, to be either linear or non-linear. The linear units allow the network to potentially count (one can easily add and subtract constants) and store a finite amount of information for a long period of time. These mechanisms are also used in the Gated Recurrent Unit network [8]. In our work we investigate the use of a similar mechanism in a context where the memory is unbounded and structured. As opposed to previous work, we do not need to "erase" our memory to store a new unit. More recently, Graves et al. [17] have extended LSTM with an attention mechanism to build a model which roughly resembles a Turing machine with limited tape. Their memory controller works with a fixed-size memory, and it is not clear if its complexity is necessary for the simple problems they study. Finally, many works have also used external memory modules with a recurrent network, such as stacks [12, 13, 20, 26, 37]. Zheng et al. [37] use a discrete external stack which may be hard to learn on long sequences. Das et al. [12] learn a continuous stack which has some similarities with ours. The mechanisms used in their work are quite different from ours.
Their memory cells are associated with weights to allow a continuous representation of the stack, in order to train it with a continuous optimization scheme. On the other hand, our solution is closer to a standard RNN with special connectivities which simulate a stack with unbounded capacity. We tackle problems which are closely related to the ones addressed in these works and try to go further by exploring more challenging problems such as binary addition. 4 Model 4.1 Simple recurrent network We consider sequential data that comes in the form of discrete tokens, such as characters or words. The goal is to design a model able to predict the next symbol in a stream of data. Our approach is based on a standard model called the recurrent neural network (RNN), popularized by Elman [14]. An RNN consists of an input layer, a hidden layer with a recurrent time-delayed connection and an output layer. The recurrent connection allows the propagation of information through time. Given a sequence of tokens, the RNN takes as input the one-hot encoding x_t of the current token and predicts the probability y_t of the next symbol. There is a hidden layer with m units which stores additional information about the previous tokens seen in the sequence. More precisely, at each time t, the state of the hidden layer h_t is updated based on its previous state h_{t−1} and the encoding x_t of the current token, according to the following equation: h_t = σ(U x_t + R h_{t−1}), (1) where σ(x) = 1/(1 + exp(−x)) is the sigmoid activation function applied coordinate-wise, U is the d × m token embedding matrix and R is the m × m matrix of recurrent weights. Given the state of these hidden units, the network then outputs the probability vector y_t of the next token, according to the following equation: y_t = f(V h_t), (2) where f is the softmax function [6] and V is the m × d output matrix, where d is the number of different tokens.
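Eqs. (1)-(2) amount to a few lines of NumPy. In this sketch we pick matrix shapes so the products type-check (U maps a one-hot token into the hidden layer, V maps the hidden layer to vocabulary logits); the tiny sizes and random initialization are our own illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
d, m = 5, 8                     # vocabulary size, hidden units
U = rng.normal(0, 0.1, (m, d))  # token embedding
R = rng.normal(0, 0.1, (m, m))  # recurrent weights
V = rng.normal(0, 0.1, (d, m))  # output weights

def rnn_step(x_t, h_prev):
    # Eq. (1): sigmoid of input projection plus recurrent projection.
    h_t = 1.0 / (1.0 + np.exp(-(U @ x_t + R @ h_prev)))
    # Eq. (2): softmax over the vocabulary (shifted for numerical stability).
    z = V @ h_t
    y_t = np.exp(z - z.max())
    y_t /= y_t.sum()
    return h_t, y_t

h, y = rnn_step(np.eye(d)[2], np.zeros(m))  # feed the one-hot for token 2
```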
This architecture is able to learn relatively complex patterns similar in nature to the ones captured by N-grams. While this has made RNNs interesting for language modeling [23], they may not have the capacity to learn how algorithmic patterns are generated. In the next section, we show how to add an external memory to RNNs which has the theoretical capability to learn simple algorithmic patterns. [Figure 1: (a) Neural network extended with a push-down stack and a controlling mechanism that learns what action (among PUSH, POP and NO-OP) to perform. (b) The same model extended with a doubly-linked list with actions INSERT, LEFT, RIGHT and NO-OP.] 4.2 Pushdown network In this section, we describe a simple structured memory inspired by the pushdown automaton, i.e., an automaton which employs a stack. We train our network to learn how to operate this memory with standard optimization tools. A stack is a type of persistent memory which can be accessed only through its topmost element. Three basic operations can be performed with a stack: POP removes the top element, PUSH adds a new element on top of the stack and NO-OP does nothing. For simplicity, we first consider a simplified version where the model can only choose between a PUSH or a POP at each time step. We suppose that this decision is made by a 2-dimensional variable a_t which depends on the state of the hidden variable h_t: a_t = f(A h_t), (3) where A is a 2 × m matrix (m is the size of the hidden layer) and f is a softmax function. We denote by a_t[PUSH] the probability of the PUSH action, and by a_t[POP] the probability of the POP action. We suppose that the stack is stored at time t in a vector s_t of size p. Note that p could be increased on demand and does not have to be fixed, which allows the capacity of the model to grow. The top element is stored at position 0, with value s_t[0]: s_t[0] = a_t[PUSH] σ(D h_t) + a_t[POP] s_{t−1}[1], (4) where D is a 1 × m matrix.
If a_t[POP] is equal to 1, the top element is replaced by the value below it (all values are moved up by one position in the stack structure). If a_t[PUSH] is equal to 1, we move all values down in the stack and add a value on top of the stack. Similarly, for an element stored at a depth i > 0 in the stack, we have the following update rule: s_t[i] = a_t[PUSH] s_{t−1}[i − 1] + a_t[POP] s_{t−1}[i + 1]. (5) We use the stack to carry information to the hidden layer at the next time step. When the stack is empty, s_t is set to −1. The hidden layer h_t is now updated as: h_t = σ(U x_t + R h_{t−1} + P s^k_{t−1}), (6) where P is a m × k recurrent matrix and s^k_{t−1} are the k top-most elements of the stack at time t − 1. In our experiments, we set k to 2. We call this model Stack RNN, and show it in Figure 1-a without the recurrent matrix R for clarity. Stack with a no-operation. Adding the NO-OP action allows the stack to keep the same value on top, by a minor change of the stack update rule. Eq. (4) is replaced by: s_t[0] = a_t[PUSH] σ(D h_t) + a_t[POP] s_{t−1}[1] + a_t[NO-OP] s_{t−1}[0]. Extension to multiple stacks. Using a single stack has serious limitations, especially considering that at each time step only one action can be performed. We increase the capacity of the model by using multiple stacks in parallel. The stacks can interact through the hidden layer, allowing them to process more challenging patterns.

Method                         a^n b^n   a^n b^n c^n   a^n b^n c^n d^n   a^n b^{2n}   a^n b^m c^{n+m}
RNN                            25%       23.3%         13.3%             23.3%        33.3%
LSTM                           100%      100%          68.3%             75%          100%
List RNN 40+5                  100%      33.3%         100%              100%         100%
Stack RNN 40+10                100%      100%          100%              100%         43.3%
Stack RNN 40+10 + rounding     100%      100%          100%              100%         100%

Table 2: Comparison with RNN and LSTM on sequences generated by counting algorithms. The sequences seen during training are such that n < 20 (and n + m < 20), and we test on sequences up to n = 60. We report the percentage of n for which the model was able to correctly predict the sequences. Performance above 33.3% means the model is able to generalize to never-seen sequence lengths.
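The soft PUSH/POP mixture of Eqs. (4)-(5) is just a pair of shifts. This NumPy sketch uses −1 as the empty-cell value, as in the text; the function and argument names are our own.

```python
import numpy as np

def stack_update(s_prev, a_push, a_pop, new_top):
    # Eq. (4): the new top mixes the pushed value with the element below.
    # Eq. (5): deeper cells mix a shift-down (PUSH) with a shift-up (POP);
    # the cell revealed at the bottom by a POP reads the empty value -1.
    s = np.empty_like(s_prev)
    s[0] = a_push * new_top + a_pop * s_prev[1]
    s[1:] = a_push * s_prev[:-1] + a_pop * np.append(s_prev[2:], -1.0)
    return s
```

With a_push = 1 the stack shifts down and new_top lands on top; with a_pop = 1 everything shifts up; intermediate controller values interpolate smoothly between the two, which is what keeps the memory trainable by backpropagation.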
Doubly-linked lists. While in this paper we mostly focus on an infinite memory based on stacks, it is straightforward to extend the model to other forms of infinite memory, for example the doubly-linked list. A list is a one-dimensional memory where each node is connected to its left and right neighbors. There is a read/write head associated with the list. The head can move between nearby nodes and insert a new node at its current position. More precisely, we consider three different actions: INSERT, which inserts an element at the current position of the head; LEFT, which moves the head to the left; and RIGHT, which moves it to the right. Given a list L and a fixed head position HEAD, the updates are:

Lt[i] = at[RIGHT] Lt−1[i + 1] + at[LEFT] Lt−1[i − 1] + at[INSERT] σ(D ht)       if i = HEAD,
Lt[i] = at[RIGHT] Lt−1[i + 1] + at[LEFT] Lt−1[i − 1] + at[INSERT] Lt−1[i + 1]   if i < HEAD,
Lt[i] = at[RIGHT] Lt−1[i + 1] + at[LEFT] Lt−1[i − 1] + at[INSERT] Lt−1[i]       if i > HEAD.

Note that we can add a NO-OP operation as well. We call this model List RNN, and show it in Figure 1-b without the recurrent matrix R for clarity.

Optimization. The models presented above are continuous and can thus be trained with stochastic gradient descent (SGD) and back-propagation through time [30, 32, 35]. As patterns become more complex, more complex memory controllers must be learned. In practice, we observe that these more complex controllers are harder to learn with SGD; using several random restarts seems to solve the problem in our case. We have also explored other types of search-based procedures, as discussed in the supplementary material.

Rounding. Continuous operators on stacks introduce small imprecisions, leading to numerical issues on very long sequences. While simply discretizing the controllers partially solves this problem, we design a more robust rounding procedure tailored to our model.
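The piecewise list update above can likewise be sketched in NumPy. This is a sketch under our own assumptions (function name, vector d in place of a row of D, and reads outside the list returning the empty value −1):

```python
import numpy as np

INSERT, LEFT, RIGHT = 0, 1, 2

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def list_update(L_prev, head, a, h, d):
    """One continuous update of the doubly-linked list memory (rule above).

    L_prev : previous list values (1-D array); empty cells hold -1
    head   : fixed head position HEAD
    a      : action weights [a_INSERT, a_LEFT, a_RIGHT], summing to 1
    h, d   : controller hidden state and projection for the inserted value
    """
    n = len(L_prev)
    get = lambda i: L_prev[i] if 0 <= i < n else -1.0  # out of range = empty
    L = np.empty(n)
    for i in range(n):
        moved = a[RIGHT] * get(i + 1) + a[LEFT] * get(i - 1)
        if i == head:
            ins = sigmoid(d @ h)   # value written at the head
        elif i < head:
            ins = get(i + 1)       # cells left of the head shift on INSERT
        else:
            ins = get(i)           # cells right of the head keep their value
        L[i] = moved + a[INSERT] * ins
    return L
```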
We slowly make the controllers converge to discrete values by multiplying their weights by a constant that slowly goes to infinity. We fine-tune the weights of our network as this multiplicative variable increases, leading to a smoother rounding of the network. Finally, we remove unused stacks by exploring models that use only a subset of the stacks. While a brute-force search would be exponential in the number of stacks, we can do this efficiently by building a tree of removable stacks and exploring it with depth-first search.

5 Experiments and results

First, we consider various sequences generated by simple algorithms, where the goal is to learn their generation rule [3, 12, 29]. We hope to understand the scope of algorithmic patterns each model can capture. We also evaluate the models on a standard language modeling dataset, Penn Treebank.

Implementation details. Stack and List RNNs are trained with SGD and backpropagation through time with 50 steps [32], a hard clipping of 15 to prevent gradient explosions [23], and an initial learning rate of 0.1. The learning rate is divided by 2 each time the entropy on the validation set stops decreasing. The depth k defined in Eq. (6) is set to 2. The free parameters are the number of hidden units, the number of stacks and the use of NO-OP. The baselines are RNNs with 40, 100 and 500 units, and LSTMs with 1 and 2 layers with 50, 100 and 200 units. The hyper-parameters of the baselines are selected on the validation sets.

5.1 Learning simple algorithmic patterns

Given an algorithm with a short description length, we generate sequences and concatenate them into longer sequences.
This is an unsupervised task, since the boundaries of each generated sequence are not known. We study patterns related to counting and memorization, as shown in Table 1. To evaluate whether a model has the capacity to understand the generation rule used to produce the sequences, it is tested on sequences it has not seen during training. Our experimental setting is the following: the training and validation sets are composed of sequences generated with n up to N < 20, while the test set is composed of sequences generated with n up to 60.

current | next | prediction | proba(next) | action (stack 1) | action (stack 2) | stack1[top] | stack2[top]
b | a | a | 0.99 | POP  | POP  | -1   | 0.53
a | a | a | 0.99 | PUSH | POP  | 0.01 | 0.97
a | a | a | 0.95 | PUSH | PUSH | 0.18 | 0.99
a | a | a | 0.93 | PUSH | PUSH | 0.32 | 0.98
a | a | a | 0.91 | PUSH | PUSH | 0.40 | 0.97
a | a | a | 0.90 | PUSH | PUSH | 0.46 | 0.97
a | b | a | 0.10 | PUSH | PUSH | 0.52 | 0.97
b | b | b | 0.99 | PUSH | PUSH | 0.57 | 0.97
b | b | b | 1.00 | POP  | PUSH | 0.52 | 0.56
b | b | b | 1.00 | POP  | PUSH | 0.46 | 0.01
b | b | b | 1.00 | POP  | PUSH | 0.40 | 0.00
b | b | b | 1.00 | POP  | PUSH | 0.32 | 0.00
b | b | b | 1.00 | POP  | PUSH | 0.18 | 0.00
b | b | b | 0.99 | POP  | PUSH | 0.01 | 0.00
b | b | b | 0.99 | POP  | POP  | -1   | 0.00
b | b | b | 0.99 | POP  | POP  | -1   | 0.00
b | b | b | 0.99 | POP  | POP  | -1   | 0.00
b | b | b | 0.99 | POP  | POP  | -1   | 0.01
b | a | a | 0.99 | POP  | POP  | -1   | 0.56

Table 3: Example of the Stack RNN with 20 hidden units and 2 stacks on a sequence anb2n with n = 6. −1 means that the stack is empty. The depth k is set to 1 for clarity. We see that the first stack pushes an element every time it sees a and pops when it sees b. The second stack pushes when it sees a. When it sees b, it pushes if the first stack is not empty and pops otherwise. This shows how the two stacks interact to correctly predict the deterministic part of the sequence (shown in bold).

Figure 2 (two panels: Memorization, left; Binary addition, right): Comparison of RNN, LSTM, List RNN and Stack RNN on memorization, and the performance of Stack RNN on binary addition. The accuracy is the proportion of correctly predicted sequences generated with a given n. We use 100 hidden units and 10 stacks.
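For concreteness, here is one way the training stream for a counting pattern such as anb2n could be generated. This is a sketch only: the function name and the uniform choice of n are our assumptions, since the paper does not specify the sampling distribution.

```python
import random

def gen_anb2n(num_patterns, n_max, seed=0):
    """Concatenate patterns a^n b^{2n} with random n, as in the counting tasks.

    The first character of each pattern is unpredictable; once 'b' starts,
    the rest of the pattern (the bold part in Table 1) is deterministic.
    """
    rng = random.Random(seed)
    out = []
    for _ in range(num_patterns):
        n = rng.randint(1, n_max)
        out.append("a" * n + "b" * (2 * n))
    return "".join(out)
```

The concatenation leaves the pattern boundaries implicit, which is what makes the task unsupervised.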
During training, we incrementally increase the parameter n every few epochs until it reaches some N. At test time, we measure the performance by counting the number of correctly predicted sequences. A sequence is considered correctly predicted if we correctly predict its deterministic part, shown in bold in Table 1. On these toy examples, the recurrent matrix R defined in Eq. (1) is set to 0 to isolate the mechanisms that the stacks and lists can capture.

Counting. Results on patterns generated by "counting" algorithms are shown in Table 2. We report the percentage of sequence lengths for which a method is able to correctly predict sequences of that length. List RNN and Stack RNN have 40 hidden units and either 5 lists or 10 stacks. For these tasks, the NO-OP operation is not used. Table 2 shows that RNNs are unable to generalize to longer sequences, and only correctly predict sequences seen during training. LSTM is able to generalize to longer sequences, which shows that it is able to count, since the hidden units in an LSTM can be linear [16]. With a finer hyper-parameter search, the LSTM should be able to achieve 100% on all of these tasks. Despite the absence of linear units, our models are also able to generalize. For anbmcn+m, rounding is required to obtain the best performance. Table 3 shows an example of the actions performed by a Stack RNN with two stacks on a sequence of the form anb2n. For clarity, we show a sequence generated with n equal to 6, and we use discretization. Stack RNN pushes an element on both stacks when it sees a. The first stack pops elements when the input is b, and the second stack starts popping only when the first one is empty. Note that the second stack pushes a special value to keep track of the sequence length, i.e. 0.56.

Memorization. Figure 2 shows results on memorization for a dictionary with two elements. Stack RNN has 100 units and 10 stacks, and List RNN has 10 lists.
We use random restarts and repeat this process multiple times. Stack RNN and List RNN are able to learn memorization, while RNN and LSTM do not seem to generalize. In practice, List RNN is more unstable than Stack RNN and overfits on the training set more frequently. This instability may be explained by the higher number of actions the controller can choose from (4 versus 3). For this reason, we focus on Stack RNN in the rest of the experiments.

Figure 3: An example of a learned Stack RNN that performs binary addition. The last column is our interpretation of the functionality learned by the different stacks. The color code is: green means PUSH, red means POP and grey means actions equivalent to NO-OP. We show the current (discretized) value on top of each stack at each time step. The sequence is read from left to right, one character at a time. In bold is the part of the sequence which has to be predicted. Note that the result is written in reverse.

Binary addition. Given a sequence representing a binary addition, e.g., "101+1=", the goal is to predict the result, e.g., "110." where "." represents the end of the sequence. As opposed to the previous tasks, this task is supervised, i.e., the location of the deterministic tokens is provided. The result of the addition is asked for in reverse order, e.g., "011." in the previous example. As before, we train on short sequences and test on longer ones. The length of the two input numbers is chosen such that the sum of their lengths is equal to n (less than 20 during training and up to 60 at test time). Their most significant digit is always set to 1. Stack RNN has 100 hidden units with 10 stacks. The right panel of Figure 2 shows the results averaged over multiple runs (with random restarts). While Stack RNN generalizes to longer numbers, it overfits on the validation set for some runs, leading to larger error bars than in the previous experiments.
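A single supervised binary-addition example in the format described above can be sketched as follows (the helper name is our own; the reversed answer terminated by "." follows the task description):

```python
def addition_example(a, b):
    """Build one binary-addition example.

    The input is 'bin(a)+bin(b)=' and the target is the binary sum written
    in reverse, terminated by '.', as in the example "101+1=" -> "011.".
    """
    question = format(a, "b") + "+" + format(b, "b") + "="
    answer = format(a + b, "b")[::-1] + "."
    return question, answer
```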
Figure 3 shows an example of a model which generalizes to long sequences of binary addition. This example illustrates the moderately complex behavior that the Stack RNN learns to solve this task: the first stack keeps track of where we are in the sequence, i.e., either reading the first number, reading the second number or writing the result. Stack 6 keeps the first number in memory. Interestingly, the first number is first captured by stacks 3 and 5 and then copied to stack 6. The second number is stored on stack 3, while its length is captured on stack 4 (by pushing a one and then a set of zeros). When producing the result, the values stored on these three stacks are popped. Finally, stack 5 takes care of the carry: it switches between two states (0 or 1) which explicitly say whether there is a carry-over or not. While this use of the stacks is not optimal in the sense of minimal description length, it is able to generalize to sequences never seen before.

5.2 Language modeling

Model         | Validation perplexity | Test perplexity
Ngram         | -                     | 141
Ngram + Cache | -                     | 125
RNN           | 137                   | 129
LSTM          | 120                   | 115
SRCN [24]     | 120                   | 115
Stack RNN     | 124                   | 118

Table 4: Comparison of RNN, LSTM, SRCN [24] and Stack RNN on the Penn Treebank Corpus. We use the recurrent matrix R in Stack RNN as well as 100 hidden units and 60 stacks.

We compare Stack RNN with RNN, LSTM and SRCN [24] on the standard language modeling dataset, the Penn Treebank Corpus. SRCN is a standard RNN with additional self-connected linear units which capture long-term dependencies, similar to a bag of words. The models have only one hidden layer with 100 hidden units. Table 4 shows that Stack RNN performs better than RNN with a comparable number of parameters, but not as well as LSTM and SRCN. Empirically, we observe that Stack RNN learns to store an exponentially decaying bag of words, similar in nature to the memory of SRCN.

6 Discussion and future work

Continuous versus discrete model and search.
Certain simple algorithmic patterns can be efficiently learned using a continuous optimization approach (stochastic gradient descent) applied to a continuous model representation (in our case an RNN). Note that Stack RNN works better than prior RNN-based work from the nineties [12, 34, 37]. It also seems simpler than many other approaches designed for these tasks [3, 17, 31]. However, it is not clear whether a continuous representation is completely appropriate for learning algorithmic patterns. It may be more natural to attempt to solve these problems with a discrete model. This motivates us to try to combine continuous and discrete optimization. It is possible that the future of learning algorithmic patterns will involve such a combination of discrete and continuous optimization.

Long-term memory. While in theory using multiple stacks to represent memory is as powerful as a Turing-complete computational system, intricate interactions between stacks need to be learned to capture more complex algorithmic patterns. Stack RNN also requires the input and output sequences to be in the right format (e.g., memorization is in reversed order). It would be interesting to consider other forms of memory which may be more flexible, as well as additional mechanisms which allow multiple steps to be performed with the memory, such as loops or random access. Finally, complex algorithmic patterns can be more easily learned by composing simpler algorithms. Designing a model which possesses a mechanism to compose algorithms automatically, and training it on incrementally harder tasks, is a very important research direction.

7 Conclusion

We have shown that certain difficult pattern recognition problems can be solved by augmenting a recurrent network with structured, growing (potentially unlimited) memory.
We studied very simple memory structures such as a stack and a list, but the same approach can be used to learn how to operate more complex ones (for example, a multi-dimensional tape). While currently the topology of the long-term memory is fixed, we think that it should be learned from the data as well.

Acknowledgment. We would like to thank Arthur Szlam, Keith Adams, Jason Weston, Yann LeCun and the rest of the Facebook AI Research team for their useful comments. The code is available at https://github.com/facebook/Stack-RNN.

References
[1] Y. Bengio and Y. LeCun. Scaling learning algorithms towards AI. Large-scale kernel machines, 2007.
[2] C. M. Bishop. Pattern recognition and machine learning. Springer New York, 2006.
[3] M. Bodén and J. Wiles. Context-free and context-sensitive dynamics in recurrent neural networks. Connection Science, 2000.
[4] L. Bottou. Large-scale machine learning with stochastic gradient descent. In COMPSTAT. Springer, 2010.
[5] L. Breiman. Random forests. Machine Learning, 45(1):5–32, 2001.
[6] J. S. Bridle. Probabilistic interpretation of feedforward classification network outputs, with relationships to statistical pattern recognition. In Neurocomputing, pages 227–236. Springer, 1990.
[7] M. H. Christiansen and N. Chater. Toward a connectionist model of recursion in human linguistic performance. Cognitive Science, 23(2):157–205, 1999.
[8] J. Chung, C. Gulcehre, K. Cho, and Y. Bengio. Gated feedback recurrent neural networks. arXiv, 2015.
[9] D. C. Ciresan, U. Meier, J. Masci, L. M. Gambardella, and J. Schmidhuber. High-performance neural networks for visual object classification. arXiv preprint, 2011.
[10] M. W. Crocker. Mechanisms for sentence processing. University of Edinburgh, 1996.
[11] G. E. Dahl, D. Yu, L. Deng, and A. Acero. Context-dependent pre-trained deep neural networks for large-vocabulary speech recognition. Audio, Speech, and Language Processing, 20(1):30–42, 2012.
[12] S. Das, C. Giles, and G. Sun.
Learning context-free grammars: Capabilities and limitations of a recurrent neural network with an external stack memory. In ACCSS, 1992.
[13] S. Das, C. Giles, and G. Sun. Using prior knowledge in a NNPDA to learn context-free languages. NIPS, 1993.
[14] J. L. Elman. Finding structure in time. Cognitive Science, 14(2):179–211, 1990.
[15] M. Fanty. Context-free parsing in connectionist networks. Parallel natural language processing, 1994.
[16] F. A. Gers and J. Schmidhuber. LSTM recurrent networks learn simple context-free and context-sensitive languages. Transactions on Neural Networks, 12(6):1333–1340, 2001.
[17] A. Graves, G. Wayne, and I. Danihelka. Neural Turing machines. arXiv preprint, 2014.
[18] P. Grünwald. A recurrent network that performs a context-sensitive prediction task. In ACCSS, 1996.
[19] S. Hochreiter and J. Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735–1780, 1997.
[20] S. Holldobler, Y. Kalinke, and H. Lehmann. Designing a counter: Another case study of dynamics and activation landscapes in recurrent networks. In Advances in Artificial Intelligence, 1997.
[21] A. Krizhevsky, I. Sutskever, and G. Hinton. ImageNet classification with deep convolutional neural networks. In NIPS, 2012.
[22] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document recognition. 1998.
[23] T. Mikolov. Statistical language models based on neural networks. PhD thesis, Brno University of Technology, 2012.
[24] T. Mikolov, A. Joulin, S. Chopra, M. Mathieu, and M. A. Ranzato. Learning longer memory in recurrent neural networks. arXiv preprint, 2014.
[25] M. Minsky and S. Papert. Perceptrons. MIT Press, 1969.
[26] M. C. Mozer and S. Das. A connectionist symbol manipulator that discovers the structure of context-free languages. NIPS, 1993.
[27] J. B. Pollack. The induction of dynamical recognizers. Machine Learning, 7(2-3):227–252, 1991.
[28] B. Recht, C. Re, S. Wright, and F. Niu.
Hogwild: A lock-free approach to parallelizing stochastic gradient descent. In NIPS, 2011.
[29] P. Rodriguez, J. Wiles, and J. L. Elman. A recurrent neural network that learns to count. Connection Science, 1999.
[30] D. E. Rumelhart, G. Hinton, and R. J. Williams. Learning internal representations by error propagation. Technical report, DTIC Document, 1985.
[31] W. Tabor. Fractal encoding of context-free grammars in connectionist networks. Expert Systems, 2000.
[32] P. Werbos. Generalization of backpropagation with application to a recurrent gas market model. Neural Networks, 1(4):339–356, 1988.
[33] J. Weston, S. Chopra, and A. Bordes. Memory networks. In ICLR, 2015.
[34] J. Wiles and J. Elman. Learning to count without a counter: A case study of dynamics and activation landscapes in recurrent networks. In ACCSS, 1995.
[35] R. J. Williams and D. Zipser. Gradient-based learning algorithms for recurrent networks and their computational complexity. Back-propagation: Theory, architectures and applications, pages 433–486, 1995.
[36] W. Zaremba and I. Sutskever. Learning to execute. arXiv preprint, 2014.
[37] Z. Zeng, R. M. Goodman, and P. Smyth. Discrete recurrent neural networks for grammatical inference. Transactions on Neural Networks, 5(2):320–330, 1994.
Grammar as a Foreign Language

Oriol Vinyals∗ (Google, vinyals@google.com), Lukasz Kaiser∗ (Google, lukaszkaiser@google.com), Terry Koo (Google, terrykoo@google.com), Slav Petrov (Google, slav@google.com), Ilya Sutskever (Google, ilyasu@google.com), Geoffrey Hinton (Google, geoffhinton@google.com)

Abstract

Syntactic constituency parsing is a fundamental problem in natural language processing and has been the subject of intensive research and engineering for decades. As a result, the most accurate parsers are domain specific, complex, and inefficient. In this paper we show that the domain-agnostic attention-enhanced sequence-to-sequence model achieves state-of-the-art results on the most widely used syntactic constituency parsing dataset, when trained on a large synthetic corpus that was annotated using existing parsers. It also matches the performance of standard parsers when trained only on a small human-annotated dataset, which shows that this model is highly data-efficient, in contrast to sequence-to-sequence models without the attention mechanism. Our parser is also fast, processing over a hundred sentences per second with an unoptimized CPU implementation.

1 Introduction

Syntactic constituency parsing is a fundamental problem in linguistics and natural language processing that has a wide range of applications. This problem has been the subject of intense research for decades, and as a result, there exist highly accurate domain-specific parsers. The computational requirements of traditional parsers are cubic in sentence length, and while linear-time shift-reduce constituency parsers have improved in accuracy in recent years, they have never matched the state-of-the-art. Furthermore, standard parsers have been designed with parsing in mind; the concept of a parse tree is deeply ingrained into these systems, which makes these methods inapplicable to other problems. Recently, Sutskever et al.
[1] introduced a neural network model for solving the general sequence-to-sequence problem, and Bahdanau et al. [2] proposed a related model with an attention mechanism that makes it capable of handling long sequences well. Both models achieve state-of-the-art results on large-scale machine translation tasks (e.g., [3, 4]). Syntactic constituency parsing can be formulated as a sequence-to-sequence problem if we linearize the parse tree (cf. Figure 2), so we can apply these models to parsing as well. Our early experiments focused on the sequence-to-sequence model of Sutskever et al. [1]. We found this model to work poorly when we trained it on standard human-annotated parsing datasets (1M tokens), so we constructed an artificial dataset by labelling a large corpus with the BerkeleyParser.

∗Equal contribution

Figure 1: A schematic outline of a run of our LSTM+A model on the sentence "Go.". See text for details.

To our surprise, the sequence-to-sequence model matched the BerkeleyParser that produced the annotation, having achieved an F1 score of 90.5 on the test set (section 23 of the WSJ). We suspected that the attention model of Bahdanau et al. [2] might be more data-efficient, and we found that this is indeed the case. We trained a sequence-to-sequence model with attention on the small human-annotated parsing dataset and were able to achieve an F1 score of 88.3 on section 23 of the WSJ without the use of an ensemble, and 90.5 with an ensemble, which matches the performance of the BerkeleyParser (90.4) when trained on the same data. Finally, we constructed a second artificial dataset consisting of only high-confidence parse trees, as measured by the agreement of two parsers. We trained a sequence-to-sequence model with attention on this data and achieved an F1 score of 92.1 on section 23 of the WSJ, matching the state-of-the-art.
This result did not require an ensemble, and as a result, the parser is also very fast.

2 LSTM+A Parsing Model

Let us first recall the sequence-to-sequence LSTM model. The Long Short-Term Memory model of [5] is defined as follows. Let xt, ht, and mt be the input, control state, and memory state at timestep t. Given a sequence of inputs (x1, . . . , xT), the LSTM computes the h-sequence (h1, . . . , hT) and the m-sequence (m1, . . . , mT) as follows:

it = sigm(W1 xt + W2 ht−1)
i′t = tanh(W3 xt + W4 ht−1)
ft = sigm(W5 xt + W6 ht−1)
ot = sigm(W7 xt + W8 ht−1)
mt = mt−1 ⊙ ft + it ⊙ i′t
ht = mt ⊙ ot

The operator ⊙ denotes element-wise multiplication, the matrices W1, . . . , W8 and the vector h0 are the parameters of the model, and all the nonlinearities are computed element-wise. In a deep LSTM, each subsequent layer uses the h-sequence of the previous layer for its input sequence x. The deep LSTM defines a distribution over output sequences given an input sequence:

P(B|A) = ∏_{t=1}^{TB} P(Bt | A1, . . . , ATA, B1, . . . , Bt−1) ≡ ∏_{t=1}^{TB} softmax(Wo · hTA+t)⊤ δBt.

The above equation assumes a deep LSTM whose input sequence is x = (A1, . . . , ATA, B1, . . . , BTB), so ht denotes the t-th element of the h-sequence of the topmost LSTM. The matrix Wo consists of the vector representations of each output symbol, and the symbol δb is a Kronecker delta with a dimension for each output symbol, so softmax(Wo · hTA+t)⊤ δBt is precisely the Bt-th element of the distribution defined by the softmax. Every output sequence terminates with a special end-of-sequence token, which is necessary in order to define a distribution over sequences of variable lengths. We use two different sets of LSTM parameters, one for the input sequence and one for the output sequence, as shown in Figure 1.

John has a dog . → S NP NNP VP VBZ NP DT NN .
John has a dog . → (S (NP NNP )NP (VP VBZ (NP DT NN )NP )VP . )S

Figure 2: Example parsing task and its linearization.
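The LSTM equations above translate directly into a single-step function. This is a minimal NumPy sketch under our own naming conventions (a dict of the eight weight matrices W1..W8, no biases, one layer):

```python
import numpy as np

def sigm(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, m_prev, W):
    """One LSTM step following the equations above.

    x      : input vector at this timestep
    h_prev : previous control state h_{t-1}
    m_prev : previous memory state m_{t-1}
    W      : dict mapping 1..8 to the weight matrices W1..W8
    """
    i = sigm(W[1] @ x + W[2] @ h_prev)           # input gate i_t
    i_tilde = np.tanh(W[3] @ x + W[4] @ h_prev)  # candidate update i'_t
    f = sigm(W[5] @ x + W[6] @ h_prev)           # forget gate f_t
    o = sigm(W[7] @ x + W[8] @ h_prev)           # output gate o_t
    m = m_prev * f + i * i_tilde                 # memory state m_t
    h = m * o                                    # control state h_t
    return h, m
```

A deep LSTM would feed the h-sequence of one such layer as the x-sequence of the next.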
Stochastic gradient descent is used to maximize the training objective, which is the average over the training set of the log probability of the correct output sequence given the input sequence.

2.1 Attention Mechanism

An important extension of the sequence-to-sequence model is the addition of an attention mechanism. We adapted the attention model from [2] which, to produce each output symbol Bt, uses an attention mechanism over the encoder LSTM states. Similar to our sequence-to-sequence model described in the previous section, we use two separate LSTMs (one to encode the sequence of input words Ai, and another one to produce or decode the output symbols Bi). Recall that the encoder hidden states are denoted (h1, . . . , hTA), and we denote the hidden states of the decoder by (d1, . . . , dTB) := (hTA+1, . . . , hTA+TB). To compute the attention vector at each output time t over the input words (1, . . . , TA) we define:

ut_i = v⊤ tanh(W′1 hi + W′2 dt)
at_i = softmax(ut_i)
d′t = Σ_{i=1}^{TA} at_i hi

The vector v and the matrices W′1, W′2 are learnable parameters of the model. The vector ut has length TA and its i-th entry contains a score of how much attention should be put on the i-th hidden encoder state hi. These scores are normalized by softmax to create the attention mask at over the encoder hidden states. In all our experiments, we use the same hidden dimensionality (256) at the encoder and the decoder, so v is a vector and W′1 and W′2 are square matrices. Lastly, we concatenate d′t with dt, which becomes the new hidden state from which we make predictions, and which is fed to the next time step in our recurrent model. In Section 4 we provide an analysis of what the attention mechanism learned, and we visualize the normalized attention vector at for all t in Figure 4.

2.2 Linearizing Parsing Trees

To apply the model described above to parsing, we need to design an invertible way of converting the parse tree into a sequence (linearization).
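Returning to the attention computation of Section 2.1, the per-step calculation can be sketched in NumPy as follows. The names and the max-subtraction for numerical stability are our additions, not the authors' code:

```python
import numpy as np

def attention(H, d_t, v, W1p, W2p):
    """One attention step over the encoder states (the equations above).

    H   : encoder hidden states, shape (T_A, n)
    d_t : decoder hidden state at output step t, shape (n,)
    v, W1p, W2p : learnable parameters (a vector and two square matrices,
                  standing in for v, W'1, W'2)
    Returns the attention mask a_t and the context vector d'_t.
    """
    # Scores u^t_i = v^T tanh(W'1 h_i + W'2 d_t), one per input position.
    u = np.array([v @ np.tanh(W1p @ h_i + W2p @ d_t) for h_i in H])
    u = u - u.max()                    # subtract max for numerical stability
    a = np.exp(u) / np.exp(u).sum()    # softmax over input positions
    context = a @ H                    # d'_t: weighted sum of encoder states
    return a, context
```

In the model, the context vector is then concatenated with d_t before making the prediction.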
We do this in a very simple way, following a depth-first traversal order, as depicted in Figure 2. We use the above model for parsing in the following way. First, the network consumes the sentence in a left-to-right sweep, creating vectors in memory. Then, it outputs the linearized parse tree using information in these vectors. As described below, we use 3 LSTM layers, reverse the input sentence and normalize part-of-speech tags. An example run of our LSTM+A model on the sentence "Go." is depicted in Figure 1 (top gray edges illustrate attention).

2.3 Parameters and Initialization

Sizes. In our experiments we used a model with 3 LSTM layers and 256 units in each layer, which we call LSTM+A. Our input vocabulary size was 90K and we output 128 symbols.

Dropout. When training on the small dataset, we additionally used 2 dropout layers, one between LSTM1 and LSTM2, and one between LSTM2 and LSTM3. We call this model LSTM+A+D.

POS-tag normalization. Since part-of-speech (POS) tags are not evaluated in the syntactic parsing F1 score, we replaced all of them by "XX" in the training data. This improved our F1 score by about 1 point, which is surprising: for standard parsers, including POS tags in the training data helps significantly. All experiments reported below are performed with normalized POS tags.

Input reversing. We also found it useful to reverse the input sentences but not their parse trees, similarly to [1]. Not reversing the input had a small negative impact on the F1 score on our development set (about 0.2 absolute). All experiments reported below are performed with input reversing.

Pre-training word vectors. The embedding layer for our 90K vocabulary can be initialized randomly or using pre-trained word-vector embeddings. We pre-trained skip-gram embeddings of size 512 using word2vec [6] on a 10B-word corpus. These embeddings were used to initialize our network but were not fixed; they were later modified during training.
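Looking back at Section 2.2, the depth-first linearization of Figure 2 can be sketched in a few lines. The tree encoding as (label, children) tuples with POS-tag strings at the leaves is an assumed representation, not one prescribed by the paper:

```python
def linearize(tree):
    """Depth-first linearization of a parse tree as in Figure 2.

    A tree is (label, children) for internal nodes and a plain POS-tag
    string for leaves; the words themselves are not part of the output.
    Nonterminals emit a typed opening bracket '(L' before their children
    and a typed closing bracket ')L' after them, which keeps the
    transformation invertible.
    """
    if isinstance(tree, str):
        return [tree]
    label, children = tree
    out = ["(" + label]
    for child in children:
        out.extend(linearize(child))
    out.append(")" + label)
    return out
```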
We discuss the impact of pre-training in the experimental section. We do not apply any other special preprocessing to the data. In particular, we do not binarize the parse trees or handle unaries in any specific way. We also treat unknown words in a naive way: we map all words beyond our 90K vocabulary to a single UNK token. This potentially underestimates our final results, but keeps our framework task-independent.

3 Experiments

3.1 Training Data

We trained the model described above on 2 different datasets. For one, we trained on the standard WSJ training dataset. This is a very small training set by neural network standards, as it contains only 40K sentences (compared to 60K examples even in MNIST). Still, even training on this set, we managed to get results that match those obtained by domain-specific parsers. To match the state-of-the-art, we created another, larger training set of ∼11M parsed sentences (250M tokens). First, we collected all publicly available treebanks. We used the OntoNotes corpus version 5 [7], the English Web Treebank [8] and the updated and corrected Question Treebank [9].1 Note that the popular Wall Street Journal section of the Penn Treebank [10] is part of the OntoNotes corpus. In total, these corpora give us ∼90K training sentences (we held out certain sections for evaluation, as described below). In addition to this gold standard data, we use a corpus parsed with existing parsers using the "tri-training" approach of [11]. In this approach, two parsers, our reimplementation of BerkeleyParser [12] and a reimplementation of ZPar [13], are used to process unlabeled sentences sampled from news appearing on the web. We select only sentences for which both parsers produced the same parse tree and re-sample to match the distribution of sentence lengths of the WSJ training corpus. Re-sampling is useful because parsers agree much more often on short sentences.
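The selection step just described (keep agreement sentences, then resample to a target length distribution) can be sketched as follows. The data layout, function name, and sampling with replacement are our own assumptions:

```python
import random

def select_high_confidence(sent_parses, target_lengths, seed=0):
    """Sketch of the tri-training selection: keep sentences on which the
    two parsers agree, then resample to match a target length histogram.

    sent_parses    : list of (sentence_tokens, parse_a, parse_b) triples
    target_lengths : sentence lengths drawn from the reference corpus
                     (here, the WSJ training set)
    """
    # Keep only sentences where both parsers produced the same tree.
    agreed = [(s, pa) for s, pa, pb in sent_parses if pa == pb]
    by_len = {}
    for s, p in agreed:
        by_len.setdefault(len(s), []).append((s, p))
    # Resample: one draw per reference length, so the selected corpus
    # mirrors the reference length distribution.
    rng = random.Random(seed)
    selected = []
    for length in target_lengths:
        pool = by_len.get(length)
        if pool:
            selected.append(rng.choice(pool))
    return selected
```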
We call the set of ∼11 million sentences selected in this way, together with the ∼90K golden sentences described above, the high-confidence corpus. After creating this corpus, we made sure that no sentence from the development or test set appears in the corpus, also after replacing rare words with "unknown" tokens. This operation guarantees that we never see any test sentence during training, but it also lowers our F1 score by about 0.5 points. We are not sure if such strict de-duplication was performed in previous works, but even with this, we still match the state-of-the-art.

1All treebanks are available through the Linguistic Data Consortium (LDC): OntoNotes (LDC2013T19), English Web Treebank (LDC2012T13) and Question Treebank (LDC2012R121).

Parser                             | Training Set           | WSJ 22 | WSJ 23
baseline LSTM+D                    | WSJ only               | < 70   | < 70
LSTM+A+D                           | WSJ only               | 88.7   | 88.3
LSTM+A+D ensemble                  | WSJ only               | 90.7   | 90.5
baseline LSTM                      | BerkeleyParser corpus  | 91.0   | 90.5
LSTM+A                             | high-confidence corpus | 92.8   | 92.1
Petrov et al. (2006) [12]          | WSJ only               | 91.1   | 90.4
Zhu et al. (2013) [13]             | WSJ only               | N/A    | 90.4
Petrov et al. (2010) ensemble [14] | WSJ only               | 92.5   | 91.8
Zhu et al. (2013) [13]             | semi-supervised        | N/A    | 91.3
Huang & Harper (2009) [15]         | semi-supervised        | N/A    | 91.3
McClosky et al. (2006) [16]        | semi-supervised        | 92.4   | 92.1

Table 1: F1 scores of various parsers on the development and test set. See text for discussion.

In earlier experiments, we only used one parser, our reimplementation of BerkeleyParser, to create a corpus of parsed sentences. In that case we just parsed ∼7 million sentences from news appearing on the web and combined these parsed sentences with the ∼90K golden corpus described above. We call this the BerkeleyParser corpus.

3.2 Evaluation

We use the standard EVALB tool2 for evaluation and report F1 scores on our development set (section 22 of the Penn Treebank) and the final test set (section 23) in Table 1. First, let us remark that our training setup differs from those reported in previous works.
To the best of our knowledge, no standard parsers have ever been trained on datasets numbering in the hundreds of millions of tokens, and it would be hard to do so due to efficiency problems. We therefore cite the semi-supervised results, which are analogous in spirit but use less data. Table 1 shows the performance of our models at the top and results from other papers at the bottom. We compare to variants of the BerkeleyParser that use self-training on unlabeled data [15], or build an ensemble of multiple parsers [14], or combine both techniques. We also include the best linear-time parser in the literature, the transition-based parser of [13]. It can be seen that, when training on WSJ only, a baseline LSTM does not achieve any reasonable score, even with dropout and early stopping. But a single attention model gets to 88.3, and an ensemble of 5 LSTM+A+D models achieves 90.5, matching a single-model BerkeleyParser on WSJ 23. When trained on the large high-confidence corpus, a single LSTM+A model achieves 92.1 and so matches the best previous single-model result.

Generating well-formed trees. The LSTM+A model trained on the WSJ dataset only produced malformed trees for 25 of the 1700 sentences in our development set (1.5% of all cases), and the model trained on the full high-confidence dataset did this for 14 sentences (0.8%). In these few cases where LSTM+A outputs a malformed tree, we simply add brackets to either the beginning or the end of the tree in order to make it balanced. It is worth noting that all 14 cases where LSTM+A produced unbalanced trees were sentences or sentence fragments that did not end with proper punctuation. There were very few such sentences in the training data, so it is not a surprise that our model cannot deal with them very well.

Score by sentence length. An important concern with the sequence-to-sequence LSTM was that it may not be able to handle long sentences well.
We determine the extent of this problem by partitioning the development set by length, and evaluating BerkeleyParser, a baseline LSTM model without attention, and LSTM+A on sentences of each length. The results, presented in Figure 3, are surprising. The difference between the F1 score on sentences of length up to 30 and that up to 70 is 1.3 for the BerkeleyParser, 1.7 for the baseline LSTM, and 0.7 for LSTM+A. So already the baseline LSTM has similar performance to the BerkeleyParser and degrades only slightly with length. Surprisingly, LSTM+A shows less degradation with length than BerkeleyParser – a full O(n³) chart parser that uses a lot more memory.

²http://nlp.cs.nyu.edu/evalb/

Figure 3: Effect of sentence length on the F1 score on WSJ 22 (F1 plotted against sentence length for BerkeleyParser, the baseline LSTM, and LSTM+A).

Beam size influence. Our decoder uses a beam of a fixed size to calculate the output sequence of labels. We experimented with different settings for the beam size. It turns out that it is almost irrelevant. We report results that use beam size 10, but using beam size 2 only lowers the F1 score of LSTM+A on the development set by 0.2, and using beam size 1 lowers it by 0.5. Beam sizes above 10 do not give any additional improvements.

Dropout influence. We only used dropout when training on the small WSJ dataset and its influence was significant. A single LSTM+A model only achieved an F1 score of 86.5 on our development set, which is over 2 points lower than the 88.7 of an LSTM+A+D model.

Pre-training influence. As described in the previous section, we initialized the word-vector embedding with pre-trained word vectors obtained from word2vec. To test the influence of this initialization, we trained an LSTM+A model on the high-confidence corpus, and an LSTM+A+D model on the WSJ corpus, starting with randomly initialized word-vector embeddings.
The F1 score on our development set was 0.4 lower for the LSTM+A model and 0.3 lower for the LSTM+A+D model (88.4 vs 88.7). So the effect of pre-training is consistent but small.

Performance on other datasets. The WSJ evaluation set has been in use for 20 years and is commonly used to compare syntactic parsers. But it is not representative of text encountered on the web [8]. Even though our model was trained on a news corpus, we wanted to check how well it generalizes to other forms of text. To this end, we evaluated it on two additional datasets: QTB: 1000 held-out sentences from the Question Treebank [9]; WEB: the first half of each domain from the English Web Treebank [8] (8310 sentences). LSTM+A trained on the high-confidence corpus (which only includes text from news) achieved an F1 score of 95.7 on QTB and 84.6 on WEB. Our score on WEB is higher than both the best score reported in [8] (83.5) and the best score we achieved with an in-house reimplementation of BerkeleyParser trained on human-annotated data (84.4). We managed to achieve a slightly higher score (84.8) with the in-house BerkeleyParser trained on a large corpus. On QTB, the 95.7 score of LSTM+A is lower than the best score of our in-house BerkeleyParser (96.2). Still, taking into account that there were only a few questions in the training data, these scores show that LSTM+A managed to generalize well beyond the news language it was trained on.

Parsing speed. Our LSTM+A model, running on a multi-core CPU using batches of 128 sentences on a generic unoptimized decoder, can parse over 120 sentences from WSJ per second for sentences of all lengths (using beam-size 1). This is better than the speed reported for this batch size in Figure 4 of [17] at 100 sentences per second, even though they run on a GPU and only on sentences of under 40 words. Note that they achieve an 89.7 F1 score on this subset of sentences of section 22, while our model at beam-size 1 achieves a score of 93.2 on this subset.
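The fixed-size beam decoding discussed above ("Beam size influence") can be sketched as follows (a toy sketch; `step_logprobs` is a hypothetical stand-in for the model's next-label log-probabilities):

```python
import math

def beam_search(step_logprobs, beam_size, length):
    # step_logprobs(prefix) -> {token: log-probability of the next token}.
    # Keep the beam_size highest-scoring prefixes after every step.
    beams = [((), 0.0)]
    for _ in range(length):
        candidates = []
        for prefix, score in beams:
            for tok, lp in step_logprobs(prefix).items():
                candidates.append((prefix + (tok,), score + lp))
        candidates.sort(key=lambda c: c[1], reverse=True)
        beams = candidates[:beam_size]
    return beams[0][0]   # highest-scoring sequence found
```

With `beam_size=1` this reduces to greedy decoding, which is consistent with the observation that shrinking the beam costs only a fraction of an F1 point when the model's distributions are sharp.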
Figure 4: Attention matrix. Shown on top is the attention matrix where each column is the attention vector over the inputs. On the bottom, we show outputs for four consecutive time steps, where the attention mask moves to the right. As can be seen, every time a terminal node is consumed, the attention pointer moves to the right.

4 Analysis

As shown in this paper, the attention mechanism was a key component, especially when learning from a relatively small dataset. We found that the model did not overfit and learned the parsing function from scratch much faster, which resulted in a model that generalized much better than the plain LSTM without attention. One of the most interesting aspects of attention is that it allows us to visualize and interpret what the model has learned from the data. For example, in [2] it is shown that for translation, attention learns an alignment function, which certainly should help translating from English to French. Figure 4 shows an example of the attention model trained only on the WSJ dataset. From the attention matrix, where each column is the attention vector over the inputs, it is clear that the model focuses quite sharply on one word as it produces the parse tree. It is also clear that the focus moves from the first word to the last monotonically, and steps to the right deterministically when a word is consumed. On the bottom of Figure 4 we see where the model attends (black arrow), and the current output being decoded in the tree (black circle). This stack procedure is learned from data (as all the parameters are randomly initialized), but it is not quite simple stack decoding. Indeed, at the input side, if the model focuses on position i, that state has information for all words after i (since we also reverse the inputs). It is worth noting that, in some examples (not shown here), the model does skip words.
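Each column of an attention matrix like the one in Figure 4 is a distribution over the input positions. A minimal sketch of computing one such column (we use simple dot-product scores here; the paper's model uses the content-based attention of Bahdanau et al. [2], which scores with a small network instead):

```python
import math

def attention_weights(query, encoder_states):
    # One column of the attention matrix: a softmax over similarity scores
    # between the decoder query and each encoder state.
    scores = [sum(q * s for q, s in zip(query, state)) for state in encoder_states]
    m = max(scores)                      # subtract max for numerical stability
    exps = [math.exp(sc - m) for sc in scores]
    z = sum(exps)
    return [e / z for e in exps]
```

Stacking one such column per output step yields the matrix visualized in Figure 4; a sharply focused model puts most of each column's mass on a single input position.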
5 Related Work

The task of syntactic constituency parsing has received a tremendous amount of attention in the last 20 years. Traditional approaches to constituency parsing rely on probabilistic context-free grammars (CFGs). The focus in these approaches is on devising appropriate smoothing techniques for highly lexicalized and thus rare events [18] or carefully crafting the model structure [19]. [12] partially alleviate the heavy reliance on manual modeling of linguistic structure by using latent variables to learn a more articulated model. However, their model still depends on a CFG backbone and is thereby potentially restricted in its capacity. Early neural network approaches to parsing, for example those of [20, 21], also relied on strong linguistic insights. [22] introduced Incremental Sigmoid Belief Networks for syntactic parsing. By constructing the model structure incrementally, they are able to avoid making strong independence assumptions, but inference becomes intractable. To avoid complex inference methods, [23] propose a recurrent neural network where parse trees are decomposed into a stack of independent levels. Unfortunately, this decomposition breaks down for long sentences and their accuracy on longer sentences falls quite significantly behind the state-of-the-art. [24] used a tree-structured neural network to score candidate parse trees. Their model, however, again relies on the CFG assumption and furthermore can only be used to score candidate trees rather than for full inference. Our LSTM model significantly differs from all these models, as it makes no assumptions about the task. As a sequence-to-sequence prediction model it is somewhat related to the incremental parsing models pioneered by [25] and extended by [26]. Such linear-time parsers, however, typically need some task-specific constraints and might build up the parse in multiple passes.
Relatedly, [13] present excellent parsing results with a single left-to-right pass, but require a stack to explicitly delay making decisions and a parsing-specific transition strategy in order to achieve good parsing accuracies. The LSTM in contrast uses its short-term memory to model the complex underlying structure that connects the input-output pairs. Recently, researchers have developed a number of neural network models that can be applied to general sequence-to-sequence problems. [27] was the first to propose a differentiable attention mechanism for the general problem of handwritten text synthesis, although his approach assumed a monotonic alignment between the input and output sequences. Later, [2] introduced a more general attention model that does not assume a monotonic alignment, and applied it to machine translation, and [28] applied the same model to speech recognition. [29] used a convolutional neural network to encode a variable-sized input sentence into a vector of fixed dimension and used an RNN to produce the output sentence. Essentially the same model has been used by [30] to successfully learn to generate image captions. Finally, already in 1990, [31] experimented with applying recurrent neural networks to the problem of syntactic parsing.

6 Conclusions

In this work, we have shown that generic sequence-to-sequence approaches can achieve excellent results on syntactic constituency parsing with relatively little effort or tuning. In addition, while we found the model of Sutskever et al. [1] to not be particularly data efficient, the attention model of Bahdanau et al. [2] was found to be highly data efficient, as it matched the performance of the BerkeleyParser when trained on a small human-annotated parsing dataset. Finally, we showed that synthetic datasets with imperfect labels can be highly useful, as our models have substantially outperformed the models that were used to create their training data.
We suspect this is due to the different natures of the teacher and student models: the student model has likely viewed the teacher's errors as noise which it has been able to ignore. This approach was so successful that we obtained a new state-of-the-art result in syntactic constituency parsing with a single attention model, which also means that the model is exceedingly fast. This work shows that domain-independent models with excellent learning algorithms can match and even outperform domain-specific models.

Acknowledgement. We would like to thank Amin Ahmad, Dan Bikel and Jonni Kanerva.

References

[1] Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems, pages 3104–3112, 2014.
[2] Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473, 2014.
[3] Thang Luong, Ilya Sutskever, Quoc V. Le, Oriol Vinyals, and Wojciech Zaremba. Addressing the rare word problem in neural machine translation. arXiv preprint arXiv:1410.8206, 2014.
[4] Sébastien Jean, Kyunghyun Cho, Roland Memisevic, and Yoshua Bengio. On using very large target vocabulary for neural machine translation. arXiv preprint arXiv:1412.2007, 2014.
[5] Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735–1780, 1997.
[6] Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781, 2013.
[7] Eduard Hovy, Mitchell Marcus, Martha Palmer, Lance Ramshaw, and Ralph Weischedel. OntoNotes: the 90% solution. In NAACL. ACL, June 2006.
[8] Slav Petrov and Ryan McDonald. Overview of the 2012 shared task on parsing the web. Notes of the First Workshop on Syntactic Analysis of Non-Canonical Language (SANCL), 2012.
[9] John Judge, Aoife Cahill, and Josef van Genabith.
QuestionBank: creating a corpus of parse-annotated questions. In Proceedings of ICCL & ACL'06, pages 497–504. ACL, July 2006.
[10] Mitchell P. Marcus, Beatrice Santorini, and Mary Ann Marcinkiewicz. Building a large annotated corpus of English: the Penn Treebank. Computational Linguistics, 19(2):313–330, 1993.
[11] Zhenghua Li, Min Zhang, and Wenliang Chen. Ambiguity-aware ensemble training for semi-supervised dependency parsing. In Proceedings of ACL'14, pages 457–467. ACL, 2014.
[12] Slav Petrov, Leon Barrett, Romain Thibaux, and Dan Klein. Learning accurate, compact, and interpretable tree annotation. In ACL. ACL, July 2006.
[13] Muhua Zhu, Yue Zhang, Wenliang Chen, Min Zhang, and Jingbo Zhu. Fast and accurate shift-reduce constituent parsing. In ACL. ACL, August 2013.
[14] Slav Petrov. Products of random latent variable grammars. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the ACL, pages 19–27. ACL, June 2010.
[15] Zhongqiang Huang and Mary Harper. Self-training PCFG grammars with latent annotations across languages. In EMNLP. ACL, August 2009.
[16] David McClosky, Eugene Charniak, and Mark Johnson. Effective self-training for parsing. In NAACL. ACL, June 2006.
[17] David Hall, Taylor Berg-Kirkpatrick, John Canny, and Dan Klein. Sparser, better, faster GPU parsing. In ACL, 2014.
[18] Michael Collins. Three generative, lexicalised models for statistical parsing. In Proceedings of the 35th Annual Meeting of the ACL, pages 16–23. ACL, July 1997.
[19] Dan Klein and Christopher D. Manning. Accurate unlexicalized parsing. In Proceedings of the 41st Annual Meeting of the ACL, pages 423–430. ACL, July 2003.
[20] James Henderson. Inducing history representations for broad coverage statistical parsing. In NAACL, May 2003.
[21] James Henderson. Discriminative training of a neural network statistical parser. In Proceedings of the 42nd Meeting of the ACL (ACL'04), Main Volume, pages 95–102, July 2004.
[22] Ivan Titov and James Henderson. Constituent parsing with incremental sigmoid belief networks. In ACL. ACL, June 2007.
[23] Ronan Collobert. Deep learning for efficient discriminative parsing. In International Conference on Artificial Intelligence and Statistics, 2011.
[24] Richard Socher, Cliff C. Lin, Chris Manning, and Andrew Y. Ng. Parsing natural scenes and natural language with recursive neural networks. In ICML, 2011.
[25] Adwait Ratnaparkhi. A linear observed time statistical parser based on maximum entropy models. In Second Conference on Empirical Methods in Natural Language Processing, 1997.
[26] Michael Collins and Brian Roark. Incremental parsing with the perceptron algorithm. In Proceedings of the 42nd Meeting of the ACL (ACL'04), Main Volume, pages 111–118, July 2004.
[27] Alex Graves. Generating sequences with recurrent neural networks. arXiv preprint arXiv:1308.0850, 2013.
[28] Jan Chorowski, Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. End-to-end continuous speech recognition using attention-based recurrent NN: first results. arXiv preprint arXiv:1412.1602, 2014.
[29] Nal Kalchbrenner and Phil Blunsom. Recurrent continuous translation models. In EMNLP, pages 1700–1709, 2013.
[30] Oriol Vinyals, Alexander Toshev, Samy Bengio, and Dumitru Erhan. Show and tell: a neural image caption generator. arXiv preprint arXiv:1411.4555, 2014.
[31] Zoubin Ghahramani. A neural network for learning how to parse tree adjoining grammar. B.S.Eng. Thesis, University of Pennsylvania, 1990.
Practical and Optimal LSH for Angular Distance

Alexandr Andoni∗ (Columbia University), Piotr Indyk (MIT), Thijs Laarhoven (TU Eindhoven), Ilya Razenshteyn (MIT), Ludwig Schmidt (MIT)

Abstract

We show the existence of a Locality-Sensitive Hashing (LSH) family for the angular distance that yields an approximate Near Neighbor Search algorithm with the asymptotically optimal running time exponent. Unlike earlier algorithms with this property (e.g., Spherical LSH [1, 2]), our algorithm is also practical, improving upon the well-studied hyperplane LSH [3] in practice. We also introduce a multiprobe version of this algorithm and conduct an experimental evaluation on real and synthetic data sets. We complement the above positive results with a fine-grained lower bound for the quality of any LSH family for angular distance. Our lower bound implies that the above LSH family exhibits a trade-off between evaluation time and quality that is close to optimal for a natural class of LSH functions.

1 Introduction

Nearest neighbor search is a key algorithmic problem with applications in several fields including computer vision, information retrieval, and machine learning [4]. Given a set of n points P ⊂ R^d, the goal is to build a data structure that answers nearest neighbor queries efficiently: for a given query point q ∈ R^d, find the point p ∈ P that is closest to q under an appropriately chosen distance metric. The main algorithmic design goals are usually a fast query time, a small memory footprint, and—in the approximate setting—a good quality of the returned solution. There is a wide range of algorithms for nearest neighbor search based on techniques such as space partitioning with indexing, as well as dimension reduction or sketching [5].
A popular method for point sets in high-dimensional spaces is Locality-Sensitive Hashing (LSH) [6, 3], an approach that offers a provably sub-linear query time and sub-quadratic space complexity, and has been shown to achieve good empirical performance in a variety of applications [4]. The method relies on the notion of locality-sensitive hash functions. Intuitively, a hash function is locality-sensitive if its probability of collision is higher for “nearby” points than for points that are “far apart”. More formally, two points are nearby if their distance is at most r1, and they are far apart if their distance is at least r2 = c · r1, where c > 1 quantifies the gap between “near” and “far”. The quality of a hash function is characterized by two key parameters: p1 is the collision probability for nearby points, and p2 is the collision probability for points that are far apart. The gap between p1 and p2 determines how “sensitive” the hash function is to changes in distance, and this property is captured by the parameter ρ = log(1/p1) / log(1/p2), which can usually be expressed as a function of the distance gap c. The problem of designing good locality-sensitive hash functions and LSH-based efficient nearest neighbor search algorithms has attracted significant attention over the last few years.

∗The authors are listed in alphabetical order.

In this paper, we focus on LSH for the Euclidean distance on the unit sphere, which is an important special case for several reasons. First, the spherical case is relevant in practice: Euclidean distance on a sphere corresponds to the angular distance or cosine similarity, which are commonly used in applications such as comparing image feature vectors [7], speaker representations [8], and tf-idf data sets [9]. Moreover, on the theoretical side, the paper [2] shows a reduction from Nearest Neighbor Search in the entire Euclidean space to the spherical case.
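The exponent ρ = log(1/p1)/log(1/p2) is easy to compute for a concrete family. A small sketch using the hyperplane LSH [3], whose collision probability for points at angle θ is 1 − θ/π (the example angles below are ours, not from the paper):

```python
import math

def rho(p1, p2):
    # Sensitivity exponent rho = log(1/p1) / log(1/p2); LSH query time
    # scales roughly as n**rho, so smaller is better.
    return math.log(1.0 / p1) / math.log(1.0 / p2)

# Hyperplane LSH [3]: collision probability at angle theta is 1 - theta/pi.
p1 = 1 - (math.pi / 4) / math.pi   # "nearby" points at angle pi/4
p2 = 1 - (math.pi / 2) / math.pi   # "far" points at angle pi/2
```

Here `rho(p1, p2) = ln(4/3)/ln(2) ≈ 0.415`, noticeably worse than the optimal exponent 1/7 ≈ 0.143 for c = 2 discussed below, which is the gap this paper closes in practice.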
These connections lead to a natural question: what are good LSH families for this special case? On the theoretical side, the recent work of [1, 2] gives the best known provable guarantees for LSH-based nearest neighbor search w.r.t. the Euclidean distance on the unit sphere. Specifically, their algorithm has a query time of O(n^ρ) and space complexity of O(n^(1+ρ)) for ρ = 1/(2c² − 1).¹ E.g., for the approximation factor c = 2, the algorithm achieves a query time of n^(1/7 + o(1)). At the heart of the algorithm is an LSH scheme called Spherical LSH, which works for unit vectors. Its key property is that it can distinguish between distances r1 = √2/c and r2 = √2 with probabilities yielding ρ = 1/(2c² − 1) (the formula for the full range of distances is more complex and given in Section 3). Unfortunately, the scheme as described in the paper is not applicable in practice as it is based on rather complex hash functions that are very time-consuming to evaluate. E.g., simply evaluating a single hash function from [2] can take more time than a linear scan over 10^6 points. Since an LSH data structure contains many individual hash functions, using their scheme would be slower than a simple linear scan over all points in P unless the number of points n is extremely large. On the practical side, the hyperplane LSH introduced in the influential work of Charikar [3] has worse theoretical guarantees, but works well in practice. Since the hyperplane LSH can be implemented very efficiently, it is the standard hash function in practical LSH-based nearest neighbor algorithms,² and the resulting implementations have been shown to improve over a linear scan on real data by multiple orders of magnitude [14, 9]. The aforementioned discrepancy between the theory and practice of LSH raises an important question: is there a locality-sensitive hash function with optimal guarantees that also improves over the hyperplane LSH in practice?
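For reference, the hyperplane LSH just discussed can be sketched in a few lines (a minimal illustration; the dimension d and the number of hyperplanes k are arbitrary choices of ours):

```python
import random

def hyperplane_hash(x, hyperplanes):
    # Charikar's hyperplane LSH: one bit per random hyperplane, namely the
    # sign of the projection of x onto the hyperplane's normal vector.
    return tuple(1 if sum(h_i * x_i for h_i, x_i in zip(h, x)) >= 0 else 0
                 for h in hyperplanes)

random.seed(0)
d, k = 16, 8   # dimension and number of hyperplanes (arbitrary here)
planes = [[random.gauss(0.0, 1.0) for _ in range(d)] for _ in range(k)]
```

Evaluating k bits costs O(k·s) for a vector with s non-zeros, which is why this family is so cheap in practice despite its worse exponent.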
In this paper we show that there is a family of locality-sensitive hash functions that achieves both objectives. Specifically, the hash functions match the theoretical guarantee of Spherical LSH from [2] and, when combined with additional techniques, give better experimental results than the hyperplane LSH. More specifically, our contributions are:

Theoretical guarantees for the cross-polytope LSH. We show that a hash function based on randomly rotated cross-polytopes (i.e., unit balls of the ℓ1-norm) achieves the same parameter ρ as the Spherical LSH scheme in [2], assuming data points are unit vectors. While the cross-polytope LSH family has been proposed by researchers before [15, 16], we give the first theoretical analysis of its performance.

Fine-grained lower bound for cosine similarity LSH. To highlight the difficulty of obtaining optimal and practical LSH schemes, we prove the first non-asymptotic lower bound on the trade-off between the collision probabilities p1 and p2. So far, the schemes attaining the optimal LSH exponent ρ = 1/(2c² − 1) (those of [1, 2], and the cross-polytope LSH studied here) attain this bound only in the limit, as p1, p2 → 0. Very small p1 and p2 are undesirable since the hash evaluation time is often proportional to 1/p2. Our lower bound proves this is unavoidable: if we require p2 to be large, ρ has to be suboptimal. This result has two important implications for designing practical hash functions. First, it shows that the trade-offs achieved by the cross-polytope LSH and the scheme of [1, 2] are essentially optimal. Second, the lower bound guides the design of future LSH functions: if one is to significantly improve upon the cross-polytope LSH, one has to design a hash function that is computed more efficiently than by explicitly enumerating its range (see Section 4 for a more detailed discussion).

Multiprobe scheme for the cross-polytope LSH.
The space complexity of an LSH data structure is sub-quadratic, but even this is often too large (i.e., strongly super-linear in the number of points), and several methods have been proposed to address this issue. Empirically, the most efficient scheme is multiprobe LSH [14], which leads to a significantly reduced memory footprint for the hyperplane LSH. In order to make the cross-polytope LSH competitive in practice with the multiprobe hyperplane LSH, we propose a novel multiprobe scheme for the cross-polytope LSH. We complement these contributions with an experimental evaluation on both real and synthetic data (SIFT vectors, tf-idf data, and a random point set). In order to make the cross-polytope LSH practical, we combine it with fast pseudo-random rotations [17] via the Fast Hadamard Transform, and feature hashing [18] to exploit sparsity of data. Our results show that for data sets with around 10^5 to 10^8 points, our multiprobe variant of the cross-polytope LSH is up to 10× faster than an efficient implementation of the hyperplane LSH, and up to 700× faster than a linear scan. To the best of our knowledge, our combination of techniques provides the first “exponent-optimal” algorithm that empirically improves over the hyperplane LSH in terms of query time for an exact nearest neighbor search.

¹This running time is known to be essentially optimal for a large class of algorithms [10, 11].
²Note that if the data points are binary, more efficient LSH schemes exist [12, 13]. However, in this paper we consider algorithms for general (non-binary) vectors.

1.1 Related work

The cross-polytope LSH functions were originally proposed in [15]. However, the analysis in that paper was mostly experimental. Specifically, the probabilities p1 and p2 of the proposed LSH functions were estimated empirically using the Monte Carlo method. Similar hash functions were later proposed in [16].
The latter paper also uses the DFT to speed up the random matrix-vector multiplication. Both of the aforementioned papers consider only the single-probe algorithm. There are several works that show lower bounds on the quality of LSH hash functions [19, 10, 20, 11]. However, those papers provide only a lower bound on the ρ parameter for asymptotic values of p1 and p2, as opposed to an actual trade-off between these two quantities. In this paper we provide such a trade-off, with implications as outlined in the introduction.

2 Preliminaries

We use ∥·∥ to denote the Euclidean (a.k.a. ℓ2) norm on R^d. We also use S^(d−1) to denote the unit sphere in R^d centered at the origin. The Gaussian distribution with mean zero and variance one is denoted by N(0, 1). Let µ be the normalized Haar measure on S^(d−1) (that is, µ(S^(d−1)) = 1); note that µ corresponds to the uniform distribution over S^(d−1). We also let u ∼ S^(d−1) be a point sampled from S^(d−1) uniformly at random. For η ∈ R we denote

Φc(η) = Pr_{X∼N(0,1)}[X ≥ η] = (1/√(2π)) ∫_η^∞ e^(−t²/2) dt.

We will be interested in Near Neighbor Search on the sphere S^(d−1) with respect to the Euclidean distance. Note that the angular distance can be expressed via the Euclidean distance between normalized vectors, so our results apply to the angular distance as well.

Definition 1. Given an n-point dataset P ⊂ S^(d−1) on the sphere, the goal of the (c, r)-Approximate Near Neighbor problem (ANN) is to build a data structure that, given a query q ∈ S^(d−1) with the promise that there exists a datapoint p ∈ P with ∥p − q∥ ≤ r, reports a datapoint p′ ∈ P within distance cr from q.

Definition 2.
We say that a hash family H on the sphere S^(d−1) is (r1, r2, p1, p2)-sensitive if for every x, y ∈ S^(d−1) one has Pr_{h∼H}[h(x) = h(y)] ≥ p1 whenever ∥x − y∥ ≤ r1, and Pr_{h∼H}[h(x) = h(y)] ≤ p2 whenever ∥x − y∥ ≥ r2.

It is known [6] that an efficient (r, cr, p1, p2)-sensitive hash family implies a data structure for (c, r)-ANN with space O(n^(1+ρ)/p1 + dn) and query time O(d · n^ρ/p1), where ρ = log(1/p1)/log(1/p2).

3 Cross-polytope LSH

In this section, we describe the cross-polytope LSH, analyze it, and show how to make it practical. First, we recall the definition of the cross-polytope LSH [15]: Consider the following hash family H for points on a unit sphere S^(d−1) ⊂ R^d. Let A ∈ R^(d×d) be a random matrix with i.i.d. Gaussian entries (“a random rotation”). To hash a point x ∈ S^(d−1), we compute y = Ax/∥Ax∥ ∈ S^(d−1) and then find the point closest to y among {±e_i}, 1 ≤ i ≤ d, where e_i is the i-th standard basis vector of R^d. We use this closest neighbor as the hash of x. The following theorem bounds the collision probability for two points under the above family H.

Theorem 1. Suppose that p, q ∈ S^(d−1) are such that ∥p − q∥ = τ, where 0 < τ < 2. Then,

ln(1 / Pr_{h∼H}[h(p) = h(q)]) = (τ² / (4 − τ²)) · ln d + O_τ(ln ln d).

Before we show how to prove this theorem, we briefly describe its implications. Theorem 1 shows that the cross-polytope LSH achieves essentially the same bounds on the collision probabilities as the (theoretically) optimal LSH for the sphere from [2] (see the section “Spherical LSH” there). In particular, substituting the bounds from Theorem 1 for the cross-polytope LSH into the standard reduction from Near Neighbor Search to LSH [6], we obtain the following data structure with sub-quadratic space and sublinear query time for Near Neighbor Search on a sphere.

Corollary 1. The (c, r)-ANN on a unit sphere S^(d−1) can be solved in space O(n^(1+ρ) + dn) and query time O(d · n^ρ), where

ρ = (1/c²) · (4 − c²r²)/(4 − r²) + o(1).

We now outline the proof of Theorem 1. For the full proof, see Appendix B.
Due to the spherical symmetry of Gaussians, we can assume that p = e1 and q = αe1 + βe2, where α, β are such that α² + β² = 1 and (α − 1)² + β² = τ². Then, we expand the collision probability:

Pr_{h∼H}[h(p) = h(q)] = 2d · Pr_{h∼H}[h(p) = h(q) = e1]
  = 2d · Pr_{u,v∼N(0,1)^d}[∀i: |u_i| ≤ u_1 and |αu_i + βv_i| ≤ αu_1 + βv_1]
  = 2d · E_{X1,Y1}[ Pr_{X2,Y2}[|X2| ≤ X1 and |αX2 + βY2| ≤ αX1 + βY1]^(d−1) ],   (1)

where X1, Y1, X2, Y2 ∼ N(0, 1). Indeed, the first step is due to the spherical symmetry of the hash family; the second step follows from the above discussion about replacing a random orthogonal matrix with a Gaussian one and the fact that one can assume w.l.o.g. that p = e1 and q = αe1 + βe2; the last step is due to the independence of the entries of u and v. Thus, proving Theorem 1 reduces to estimating the right-hand side of (1). Note that the probability Pr[|X2| ≤ X1 and |αX2 + βY2| ≤ αX1 + βY1] is equal to the Gaussian area of the planar set S_{X1,Y1} shown in Figure 1a. The latter is heuristically equal to 1 − e^(−∆²/2), where ∆ is the distance from the origin to the complement of S_{X1,Y1}, which is easy to compute (see Appendix A for the precise statement of this argument). Using this estimate, we compute (1) by taking the outer expectation.

3.1 Making the cross-polytope LSH practical

As described above, the cross-polytope LSH is not quite practical. The main bottleneck is sampling, storing, and applying a random rotation. In particular, to multiply a random Gaussian matrix with a vector, we need time proportional to d², which is infeasible for large d.

Pseudo-random rotations. To rectify this issue, we instead use pseudo-random rotations. Instead of multiplying an input vector x by a random Gaussian matrix, we apply the following linear transformation: x ↦ HD₃HD₂HD₁x, where H is the Hadamard transform, and D_i for i ∈ {1, 2, 3} is a random diagonal ±1-matrix. Clearly, this is an orthogonal transformation, which one can store in space O(d) and evaluate in time O(d log d) using the Fast Hadamard Transform.
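The transformation x ↦ HD₃HD₂HD₁x can be sketched as follows (a sketch, not the authors' implementation; we use the normalized Hadamard transform so that the map is exactly orthogonal):

```python
import math
import random

def fht(v):
    # In-place Fast Hadamard Transform, normalized so the map is orthogonal.
    # len(v) must be a power of two.
    n, h = len(v), 1
    while h < n:
        for i in range(0, n, 2 * h):
            for j in range(i, i + h):
                a, b = v[j], v[j + h]
                v[j], v[j + h] = a + b, a - b
        h *= 2
    scale = 1.0 / math.sqrt(n)
    for i in range(n):
        v[i] *= scale

def pseudo_random_rotation(x, sign_diagonals):
    # x -> H D3 H D2 H D1 x, with each D a random diagonal +-1 matrix.
    v = list(x)
    for D in sign_diagonals:
        v = [s * c for s, c in zip(D, v)]
        fht(v)
    return v

random.seed(2)
d = 8
x = [random.gauss(0.0, 1.0) for _ in range(d)]
signs = [[random.choice([-1, 1]) for _ in range(d)] for _ in range(3)]
y = pseudo_random_rotation(x, signs)
```

Since the normalized Hadamard transform and the sign flips are both orthogonal, the composition preserves norms, so hashing the rotated point is legitimate; the butterfly structure gives the O(d log d) evaluation time.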
This is similar to pseudo-random rotations used in the context of LSH [21], dimensionality reduction [17], or compressed sensing [22]. While we are currently not aware how to prove rigorously that such pseudo-random rotations perform as well as fully random ones, empirical evaluations show that three applications of HD_i are exactly equivalent to applying a true random rotation (when d tends to infinity). We note that only two applications of HD_i are not sufficient.

Figure 1: (a) The set appearing in the analysis of the cross-polytope LSH: S_{X1,Y1} = {(x, y) : |x| ≤ X1 and |αx + βy| ≤ αX1 + βY1}, bounded by the lines x = ±X1 and αx + βy = ±(αX1 + βY1). (b) Trade-off between the sensitivity ρ and the number of parts T for distances √2/2 and √2 (approximation c = 2), showing the cross-polytope LSH against the lower bound; both bounds tend to 1/7 (see discussion in Section 4).

Feature hashing. While we can apply a pseudo-random rotation in time O(d log d), even this can be too slow. E.g., consider an input vector x that is sparse: the number s of non-zero entries of x is much smaller than d. In this case, we can evaluate the hyperplane LSH from [3] in time O(s), while computing the cross-polytope LSH (even with pseudo-random rotations) still takes time O(d log d). To speed up the cross-polytope LSH for sparse vectors, we apply feature hashing [18]: before performing a pseudo-random rotation, we reduce the dimension from d to d′ ≪ d by applying a linear map x ↦ Sx, where S is a random sparse d′ × d matrix whose columns have one non-zero ±1 entry sampled uniformly. This way, the evaluation time becomes O(s + d′ log d′).³

“Partial” cross-polytope LSH. In the above discussion, we defined the cross-polytope LSH as a hash family that returns the closest neighbor among {±e_i}, 1 ≤ i ≤ d, as a hash (after a (pseudo-)random rotation). In principle, we do not have to consider all d basis vectors when computing the closest neighbor.
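The feature-hashing step can be sketched as follows (a sketch under our own conventions: sparse vectors as index→value dicts, and a deterministic per-coordinate seed in place of explicitly storing the random matrix S):

```python
import random

def feature_hash(x, d_prime, seed=0):
    # Map a sparse vector (dict: index -> value) down to dimension d':
    # each input coordinate j contributes +-x_j to one uniformly chosen
    # output coordinate, as in the linear map x -> Sx described above.
    # The per-coordinate RNG seed is our stand-in for storing S.
    out = [0.0] * d_prime
    for j, v in x.items():
        rng = random.Random(seed * 1000003 + j)
        out[rng.randrange(d_prime)] += rng.choice([-1, 1]) * v
    return out
```

This runs in O(s + d′) for a vector with s non-zeros, after which the O(d′ log d′) pseudo-random rotation is applied, giving the stated O(s + d′ log d′) total.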
By restricting the hash to d′ ≤ d basis vectors instead, Theorem 1 still holds for the new hash family (with d replaced by d′), since the analysis is essentially dimension-free. This slight generalization of the cross-polytope LSH turns out to be useful for experiments (see Section 6). Note that the case d′ = 1 corresponds to the hyperplane LSH. (One can also apply Lemma 2 from the arXiv version of [18] to claim that, after the dimension reduction via feature hashing, the distance between any two points remains sufficiently concentrated for the bounds from Theorem 1 to still hold, with d replaced by d′.)

4 Lower bound

Let H be a hash family on S^{d−1}. For 0 < r₁ < r₂ < 2 we would like to understand the trade-off between p₁ and p₂, where p₁ is the smallest probability of collision under H for points at distance at most r₁, and p₂ is the largest probability of collision for points at distance at least r₂. We focus on the case r₂ ≈ √2, because setting r₂ to √2 − o(1) (as d tends to infinity) allows us to replace p₂ with the following quantity, which is somewhat easier to handle:

$$p_2^* = \Pr_{h \sim \mathcal{H},\; u,v \sim S^{d-1}}[h(u) = h(v)].$$

This quantity is at most p₂ + o(1), since the distance between two random points on the unit sphere S^{d−1} is tightly concentrated around √2. So, for a hash family H on the unit sphere S^{d−1}, we would like to upper bound p₁ in terms of p₂* and 0 < r₁ < √2. For 0 ≤ τ ≤ √2 and η ∈ R, we define

$$\Lambda(\tau, \eta) = \frac{\Pr_{X,Y \sim N(0,1)}\Big[X \ge \eta \;\text{and}\; \big(1 - \tfrac{\tau^2}{2}\big) X + \sqrt{\tau^2 - \tfrac{\tau^4}{4}}\; Y \ge \eta\Big]}{\Pr_{X \sim N(0,1)}[X \ge \eta]}.$$

We are now ready to formulate the main result of this section.

Theorem 2. Let H be a hash family on S^{d−1} such that every function in H partitions the sphere into at most T parts of measure at most 1/2. Then we have p₁ ≤ Λ(r₁, η) + o(1), where η ∈ R is such that Φ_c(η) = p₂* (with Φ_c the complementary standard Gaussian CDF), and o(1) is a quantity that depends on T and r₁ and tends to 0 as d tends to infinity.
The idea of the proof is to first reason about one part of the partition using the isoperimetric inequality from [23], and then to apply an averaging argument by proving concavity of a function related to Λ via a delicate analytic argument. For the full proof, see Appendix C. We note that the above requirement that all parts induced by H have measure at most 1/2 is only a technicality. We conjecture that Theorem 2 holds without this restriction; in any case, as we will see below, the restriction is essentially irrelevant in the interesting range of parameters.

One can observe that if every hash function in H partitions the sphere into at most T parts, then p₂* ≥ 1/T (indeed, p₂* is precisely the average sum of squares of the measures of the parts). This observation, combined with Theorem 2, leads to the following interesting consequence. Specifically, we can numerically estimate Λ in order to give a lower bound on ρ = log(1/p₁)/log(1/p₂) for any hash family H in which every function induces at most T parts of measure at most 1/2. See Figure 1b, where we plot this lower bound for r₁ = √2/2, together with an upper bound given by the cross-polytope LSH (for which we use numerical estimates of (1)). We can draw several conclusions from this plot. First, the cross-polytope LSH gives an almost optimal trade-off between ρ and T. Given that the evaluation time for the cross-polytope LSH is O(T log T) (if one uses pseudo-random rotations), we conclude that in order to improve substantially upon the cross-polytope LSH in practice, one would need to design an LSH family with ρ close to optimal and evaluation time sublinear in T. We note that none of the known LSH families for the sphere has been shown to have this property.
This direction looks especially interesting because the convergence of ρ to the optimal value (as T tends to infinity) is extremely slow: for instance, according to Figure 1b, for r₁ = √2/2 and r₂ ≈ √2 we need more than 10⁵ parts to achieve ρ ≤ 0.2, whereas the optimal ρ is 1/7 ≈ 0.143.

5 Multiprobe LSH for the cross-polytope LSH

We now describe our multiprobe scheme for the cross-polytope LSH, which is a method for reducing the number of independent hash tables in an LSH data structure. Given a query point q, a "standard" LSH data structure considers only a single cell in each of the L hash tables (the cell is given by the hash value h_i(q) for i ∈ [L]). In multiprobe LSH, we consider candidates from multiple cells in each table [14]. The rationale is the following: points p that are close to q but fail to collide with q under hash function h_i are still likely to hash to a value that is close to h_i(q). By probing multiple hash locations close to h_i(q) in the same table, multiprobe LSH achieves a given probability of success with a smaller number of hash tables than "standard" LSH. Multiprobe LSH has been shown to perform well in practice [14, 24].

The main ingredient in multiprobe LSH is a probing scheme for generating and ranking possible modifications of the hash value h_i(q). The probing scheme should be computationally efficient and should ensure that more likely hash locations are probed first. For a single cross-polytope hash, the order of alternative hash values is straightforward: let x be the (pseudo-)randomly rotated version of the query point q. Recall that the "main" hash value is h_i(q) = arg max_{j∈[d]} |x_j|. It is then easy to see that the second-highest probability of collision is achieved for the hash value corresponding to the coordinate with the second-largest absolute value, and so on. Therefore, we consider the indices j ∈ [d] sorted by their absolute value |x_j| as our probing sequence, or "ranking", for a single cross-polytope.
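The single-cross-polytope probing ranking just described is a one-liner; the following minimal sketch (our own code, using the simplified variant that merges +e_j and −e_j into one hash value) illustrates it.

```python
import numpy as np

def cross_polytope_probes(x):
    """Given the (pseudo-)randomly rotated query x, return the hash values in
    probing order: coordinate indices sorted by decreasing |x_j|. The first
    entry is the 'main' hash value arg max_j |x_j|."""
    return np.argsort(-np.abs(x))

x = np.array([0.1, -2.0, 0.5, 1.5])
probes = cross_polytope_probes(x)
# Main hash value first (index 1, since |x_1| = 2.0 is largest), then 3, 2, 0.
assert list(probes) == [1, 3, 2, 0]
```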
The remaining question is how to combine multiple cross-polytope rankings when we have more than one hash function. As in the analysis of the cross-polytope LSH (see Section 3), we consider two points q = e₁ and p = αe₁ + βe₂ at distance R. Let A^{(i)} be the i.i.d. Gaussian matrix of hash function h_i, and let x^{(i)} = A^{(i)}e₁ be the randomly rotated version of the point q. Given x^{(i)}, we are interested in the probability of p hashing to a certain combination of the individual cross-polytope rankings. More formally, let r^{(i)}_{v_i} be the index of the v_i-th largest element of |x^{(i)}|, where v ∈ [d]^k specifies the alternative probing location. Then we would like to compute

$$\Pr_{A^{(1)},\dots,A^{(k)}}\Big[h_i(p) = r^{(i)}_{v_i} \text{ for all } i \in [k] \,\Big|\, A^{(i)} q = x^{(i)}\Big] = \prod_{i=1}^{k} \Pr_{A^{(i)}}\Big[\arg\max_{j \in [d]} \big(\alpha\, A^{(i)} e_1 + \beta\, A^{(i)} e_2\big)_j = r^{(i)}_{v_i} \,\Big|\, A^{(i)} e_1 = x^{(i)}\Big].$$

If we knew this probability for all v ∈ [d]^k, we could sort the probing locations by their probability. We now show how to approximate this probability efficiently for a single value of i (and hence drop the superscripts to simplify notation). W.l.o.g., we permute the rows of A so that r_v = v, and get

$$\Pr_{A}\Big[\arg\max_{j \in [d]} (\alpha x + \beta\, A e_2)_j = v \,\Big|\, A e_1 = x\Big] = \Pr_{y \sim N(0, I_d)}\Big[\arg\max_{j \in [d]} \Big(x + \frac{\beta}{\alpha}\, y\Big)_j = v\Big].$$

The right-hand side is the Gaussian measure of the set S = {y ∈ R^d | arg max_{j∈[d]} (x + (β/α) y)_j = v}. As in the analysis of the cross-polytope LSH, we approximate the measure of S by its distance to the origin.

(Footnotes. 4: The situation is qualitatively similar for other values of r₁. 5: More specifically, for the "partial" version from Section 3.1, since T should be constant while d grows. 6: To simplify notation, we consider a slightly modified version of the cross-polytope LSH that maps both the standard basis vector +e_j and its opposite −e_j to the same hash value. It is easy to extend the multiprobe scheme defined here to the "full" cross-polytope LSH from Section 3.)
Then the probability of probing location v is proportional to exp(−∥y_{x,v}∥²), where y_{x,v} is the shortest vector y such that arg max_j |x + y|_j = v. Note that the factor β/α affects all probing locations in the same way, and hence the resulting ranking, and thus the probing scheme, does not require knowledge of the distance R. For computational performance and simplicity, we make a further approximation and use y_{x,v} = (max_i |x_i| − |x_v|) · e_v, i.e., we only consider modifying a single coordinate in order to reach the set S.

Once we have estimated the probabilities for each v_i ∈ [d], we incrementally construct the probing sequence using a binary heap, similar to the approach in [14]. For a probing sequence of length m, the resulting algorithm has running time O(L·d log d + m log m). In our experiments, we found that the O(L·d log d) time taken to sort the probing candidates dominated the running time of the hash function evaluation. To circumvent this issue, we use an incremental sorting approach that only sorts the relevant parts of each cross-polytope and gives a running time of O(L·d + m log m).

6 Experiments

We now show that the cross-polytope LSH, combined with our multiprobe extension, leads to an algorithm that is also efficient in practice and improves over the hyperplane LSH on several data sets. The focus of our experiments is the query time for an exact nearest neighbor search. Since the hyperplane LSH has been compared to other nearest-neighbor algorithms before [8], we limit our attention to the relative speed-up compared with hyperplane hashing. We evaluate the two hashing schemes on three types of data sets. We use a synthetic data set of randomly generated points because this allows us to vary a single problem parameter while keeping the remaining parameters constant. We also investigate the performance of our algorithm on real data: two tf-idf data sets [25] and a set of SIFT feature vectors [7].
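A heap-based construction of the multiprobe sequence, in the spirit of [14] and the single-coordinate approximation above, might look like the following. This is our own illustrative sketch, not the authors' implementation; it scores a probing location v by the sum over hash functions of (max_i |x_i| − |x_{v_i}|)², so smaller scores correspond to more likely locations.

```python
import heapq
import numpy as np

def multiprobe_sequence(xs, m):
    """Enumerate the m most promising probing locations over k cross-polytopes.
    xs: list of k rotated query vectors. A location is a tuple (j_1, ..., j_k)
    of coordinate indices, returned in order of increasing score."""
    ranked = []   # per hash function: indices sorted by decreasing |x|, + scores
    for x in xs:
        order = np.argsort(-np.abs(x))
        score = (np.abs(x).max() - np.abs(x[order])) ** 2
        ranked.append((order, score))

    k = len(xs)
    start = (0,) * k                     # rank 0 in every cross-polytope
    heap = [(0.0, start)]
    seen = {start}
    out = []
    while heap and len(out) < m:
        s, ranks = heapq.heappop(heap)
        out.append(tuple(int(ranked[i][0][ranks[i]]) for i in range(k)))
        for i in range(k):               # successors: bump one rank by one
            if ranks[i] + 1 < len(xs[i]):
                nxt = ranks[:i] + (ranks[i] + 1,) + ranks[i + 1:]
                if nxt not in seen:
                    seen.add(nxt)
                    ns = s - ranked[i][1][ranks[i]] + ranked[i][1][ranks[i] + 1]
                    heapq.heappush(heap, (ns, nxt))
    return out
```

Because each per-hash score array is non-decreasing in the rank, the heap pops locations in non-decreasing total score, and the first location returned is always the combination of main hash values.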
We have chosen these data sets in order to illustrate when the cross-polytope LSH gives large improvements over the hyperplane LSH, and when the improvements are more modest. See Appendix D for a more detailed description of the data sets and our experimental setup (implementation details, CPU, etc.). In all experiments, we set the algorithm parameters so that the empirical probability of successfully finding the exact nearest neighbor is at least 0.9. Moreover, we set the number of LSH tables L so that the amount of additional memory occupied by the LSH data structure is comparable to the amount of memory necessary for storing the data set. We believe that this is the most interesting regime because significant memory overheads are often impossible for large data sets. To determine the parameters that are not fixed by the above constraints, we perform a grid search over the remaining parameter space and report the best combination of parameters. For the cross-polytope hash, we consider "partial" cross-polytopes in the last of the k hash functions in order to get a smooth trade-off between the various parameters (see Section 3.1).

Multiprobe experiments. In order to demonstrate that the multiprobe scheme is critical for making the cross-polytope LSH competitive with hyperplane hashing, we compare the performance of a

Data set  Method  Query time (ms)  Speed-up vs HP  Best k   Candidates  Hashing time (ms)  Distances time (ms)
NYT       HP      120              -               19       57,200      16                 96
NYT       CP      35               3.4x            2 (64)   17,900      3.0                30
pubmed    HP      857              -               20       1,481,000   36                 762
pubmed    CP      213              4.0x            2 (512)  304,000     18                 168
SIFT      HP      3.7              -               30       18,600      0.2                3.0
SIFT      CP      3.1              1.2x            6 (1)    13,400      0.6                2.2

Table 1: Average running times for a single nearest neighbor query with the hyperplane (HP) and cross-polytope (CP) algorithms on three real data sets. The cross-polytope LSH is faster than the hyperplane LSH on all data sets, with significant speed-ups for the two tf-idf data sets NYT and pubmed.
For the cross-polytope LSH, the entries for k include both the number of individual hash functions per table and (in parentheses) the dimension of the last of the k cross-polytopes.

"standard" cross-polytope LSH data structure with our multiprobe variant on an instance of the random data set (n = 2^20, d = 128). As can be seen in Table 2 (Appendix D), the multiprobe variant is about 13× faster in our memory-constrained setting (L = 10). Note that in all of the following experiments, the speed-up of the multiprobe cross-polytope LSH compared to the multiprobe hyperplane LSH is less than 11×. Hence, without our multiprobe addition, the cross-polytope LSH would be slower than the hyperplane LSH, for which a multiprobe scheme is already known [14].

Experiments on random data. Next, we show that the better time complexity of the cross-polytope LSH already applies for moderate values of n. In particular, we compare the cross-polytope LSH, combined with fast rotations (Section 3.1) and our multiprobe scheme, to a multiprobe hyperplane LSH on random data. We keep the dimension d = 128 and the distance to the nearest neighbor R = √2/2 fixed, and vary the size of the data set from 2^20 to 2^28. The number of hash tables L is set to 10. For 2^20 points, the cross-polytope LSH is already 3.5× faster than the hyperplane LSH, and for n = 2^28 the speed-up is 10.3× (see Table 3 in Appendix D). Compared to a linear scan, the speed-up achieved by the cross-polytope LSH ranges from 76× for n = 2^20 to about 700× for n = 2^28.

Experiments on real data. On the SIFT data set (n = 10^6 and d = 128), the cross-polytope LSH achieves a modest speed-up of 1.2× compared to the hyperplane LSH (see Table 1). On the other hand, the speed-up is 3-4× on the two tf-idf data sets, which is a significant improvement considering the relatively small size of the NYT data set (n ≈ 300,000).
One important difference between the data sets is that the typical distance to the nearest neighbor is smaller in the SIFT data set, which can make the nearest neighbor problem easier (see Appendix D). Since the tf-idf data sets are very high-dimensional but sparse (d ≈ 100,000), we use the feature hashing approach described in Section 3.1 in order to reduce the hashing time of the cross-polytope LSH (the standard hyperplane LSH already runs in time proportional to the sparsity of a vector). We use 1024 and 2048 as the feature hashing dimensions for NYT and pubmed, respectively.

Acknowledgments

We thank Michael Kapralov for many valuable discussions during various stages of this work. We also thank Stefanie Jegelka and Rasmus Pagh for helpful conversations. This work was supported in part by the NSF and the Simons Foundation. Work done in part while the first author was at the Simons Institute for the Theory of Computing.

References

[1] Alexandr Andoni, Piotr Indyk, Huy L. Nguyen, and Ilya Razenshteyn. Beyond locality-sensitive hashing. In SODA, 2014. Full version at http://arxiv.org/abs/1306.1547.
[2] Alexandr Andoni and Ilya Razenshteyn. Optimal data-dependent hashing for approximate near neighbors. In STOC, 2015. Full version at http://arxiv.org/abs/1501.01062.
[3] Moses S. Charikar. Similarity estimation techniques from rounding algorithms. In STOC, 2002.
[4] Gregory Shakhnarovich, Trevor Darrell, and Piotr Indyk. Nearest-Neighbor Methods in Learning and Vision: Theory and Practice. MIT Press, Cambridge, MA, 2005.
[5] Hanan Samet. Foundations of Multidimensional and Metric Data Structures. Morgan Kaufmann, 2006.
[6] Sariel Har-Peled, Piotr Indyk, and Rajeev Motwani. Approximate nearest neighbor: Towards removing the curse of dimensionality. Theory of Computing, 8(14):321–350, 2012.
[7] Hervé Jégou, Matthijs Douze, and Cordelia Schmid. Product quantization for nearest neighbor search.
IEEE Transactions on Pattern Analysis and Machine Intelligence, 33(1):117–128, 2011.
[8] Ludwig Schmidt, Matthew Sharifi, and Ignacio Lopez Moreno. Large-scale speaker identification. In ICASSP, 2014.
[9] Narayanan Sundaram, Aizana Turmukhametova, Nadathur Satish, Todd Mostak, Piotr Indyk, Samuel Madden, and Pradeep Dubey. Streaming similarity search over one billion tweets using parallel locality-sensitive hashing. In VLDB, 2013.
[10] Moshe Dubiner. Bucketing coding and information theory for the statistical high-dimensional nearest-neighbor problem. IEEE Transactions on Information Theory, 56(8):4166–4179, 2010.
[11] Alexandr Andoni and Ilya Razenshteyn. Tight lower bounds for data-dependent locality-sensitive hashing, 2015. Available at http://arxiv.org/abs/1507.04299.
[12] Anshumali Shrivastava and Ping Li. Fast near neighbor search in high-dimensional binary data. In Machine Learning and Knowledge Discovery in Databases, pages 474–489. Springer, 2012.
[13] Anshumali Shrivastava and Ping Li. Densifying one permutation hashing via rotation for fast near neighbor search. In ICML, 2014.
[14] Qin Lv, William Josephson, Zhe Wang, Moses Charikar, and Kai Li. Multi-probe LSH: Efficient indexing for high-dimensional similarity search. In VLDB, 2007.
[15] Kengo Terasawa and Yuzuru Tanaka. Spherical LSH for approximate nearest neighbor search on unit hypersphere. In Algorithms and Data Structures, pages 27–38. Springer, 2007.
[16] Kave Eshghi and Shyamsundar Rajaram. Locality sensitive hash functions based on concomitant rank order statistics. In KDD, 2008.
[17] Nir Ailon and Bernard Chazelle. The fast Johnson–Lindenstrauss transform and approximate nearest neighbors. SIAM Journal on Computing, 39(1):302–322, 2009.
[18] Kilian Q. Weinberger, Anirban Dasgupta, John Langford, Alexander J. Smola, and Josh Attenberg. Feature hashing for large scale multitask learning. In ICML, 2009.
[19] Rajeev Motwani, Assaf Naor, and Rina Panigrahy.
Lower bounds on locality sensitive hashing. SIAM Journal on Discrete Mathematics, 21(4):930–935, 2007.
[20] Ryan O'Donnell, Yi Wu, and Yuan Zhou. Optimal lower bounds for locality-sensitive hashing (except when q is tiny). ACM Transactions on Computation Theory, 6(1):5, 2014.
[21] Anirban Dasgupta, Ravi Kumar, and Tamás Sarlós. Fast locality-sensitive hashing. In KDD, 2011.
[22] Nir Ailon and Holger Rauhut. Fast and RIP-optimal transforms. Discrete & Computational Geometry, 52(4):780–798, 2014.
[23] Uriel Feige and Gideon Schechtman. On the optimality of the random hyperplane rounding technique for MAX CUT. Random Structures and Algorithms, 20(3):403–440, 2002.
[24] Malcolm Slaney, Yury Lifshits, and Junfeng He. Optimal parameters for locality-sensitive hashing. Proceedings of the IEEE, 100(9):2604–2623, 2012.
[25] Moshe Lichman. UCI machine learning repository, 2013.
[26] Persi Diaconis and David Freedman. A dozen de Finetti-style results in search of a theory. Annales de l'institut Henri Poincaré (B) Probabilités et Statistiques, 23(S2):397–423, 1987.
GP Kernels for Cross-Spectrum Analysis

Kyle Ulrich¹, David E. Carlson³, Kafui Dzirasa², Lawrence Carin¹
¹Department of Electrical and Computer Engineering, Duke University
²Department of Psychiatry and Behavioral Sciences, Duke University
³Department of Statistics, Columbia University
{kyle.ulrich, kafui.dzirasa, lcarin}@duke.edu, david.edwin.carlson@gmail.com

Abstract

Multi-output Gaussian processes provide a convenient framework for multi-task problems. An illustrative and motivating example of a multi-task problem is multi-region electrophysiological time-series data, where experimentalists are interested in both power and phase coherence between channels. Recently, Wilson and Adams (2013) proposed the spectral mixture (SM) kernel to model the spectral density of a single task in a Gaussian process framework. In this paper, we develop a novel covariance kernel for multiple outputs, called the cross-spectral mixture (CSM) kernel. This new, flexible kernel represents both the power and phase relationship between multiple observation channels. We demonstrate the expressive capabilities of the CSM kernel through implementation of a Bayesian hidden Markov model, where the emission distribution is a multi-output Gaussian process with a CSM covariance kernel. Results are presented for measured multi-region electrophysiological data.

1 Introduction

Gaussian process (GP) models have become an important component of the machine learning literature. They have provided a basis for non-linear multivariate regression and classification tasks, and have enjoyed much success in a wide variety of applications [16]. A GP places a prior distribution over latent functions, rather than over model parameters. In the sense that these functions are defined for any number of sample points and sample positions, as well as any general functional form, GPs are nonparametric.
The properties of the latent functions are defined by a positive definite covariance kernel that controls the covariance between the function's values at any two sample points. Recently, the spectral mixture (SM) kernel was proposed by Wilson and Adams [24] to model a spectral density with a scale-location mixture of Gaussians. This flexible and interpretable class of kernels is capable of recovering any composition of stationary kernels [27, 9, 13]. The SM kernel has been used for GP regression of a scalar output (i.e., a single function, or observation "task"), achieving impressive results in extrapolating atmospheric CO2 concentrations [24]; image inpainting [25]; and feature extraction from electrophysiological signals [21]. However, the SM kernel is not defined for multiple outputs (multiple correlated functions). Multi-output GPs intersect with the field of multi-task learning [4], where solving similar problems jointly allows for the transfer of statistical strength between problems, improving learning performance when compared to learning all tasks individually. In this paper, we consider neuroscience applications where low-frequency (< 200 Hz) extracellular potentials are simultaneously recorded from implanted electrodes in multiple brain regions of a mouse [6]. These signals are known as local field potentials (LFPs) and are often highly correlated between channels. Inferring and understanding that interdependence is biologically significant. A multi-output GP can be thought of as a standard GP (all observations are jointly normal) where the covariance kernel is a function of both the input space and the output space (see [2] and references therein for a comprehensive review); here "input space" means the points at which the functions are sampled (e.g., time), and the "output space" may correspond to different brain regions.
A particular positive definite form of this multi-output covariance kernel is the sum of separable (SoS) kernels, or the linear model of coregionalization (LMC) in the geostatistics literature [10], where a separable kernel is represented by the product of separate kernels for the input and output spaces. While extending the SM kernel to the multi-output setting via the LMC framework (i.e., the SM-LMC kernel) provides a powerful modeling framework, the SM-LMC kernel does not intuitively represent the data. Specifically, the SM-LMC kernel encodes the cross-amplitude spectrum (the square root of the cross power spectral density) between every pair of channels, but provides no cross-phase information. Together, the cross-amplitude and cross-phase spectra form the cross-spectrum, defined as the Fourier transform of the cross-covariance between the pair of channels. Motivated by the desire to encode the full cross-spectrum into the covariance kernel, we design a novel kernel termed the cross-spectral mixture (CSM) kernel, which provides an intuitive representation of the power and phase dependencies between multiple outputs. The need for embedding the full cross-spectrum into the covariance kernel is illustrated by a recent surge in neuroscience research discovering that LFP interdependencies between regions exhibit phase synchrony patterns that depend on the frequency band [11, 17, 18]. The remainder of the paper is organized as follows. Section 2 provides a summary of GP regression models for vector-valued data, and Section 3 introduces the SM, SM-LMC, and novel CSM covariance kernels. In Section 4, the CSM kernel is incorporated into a Bayesian hidden Markov model (HMM) [14] with a GP emission distribution as a demonstration of its utility in hierarchical modeling.
Section 5 provides details on inverting the Bayesian HMM with variational inference, as well as details on a fast, novel GP fitting process that approximates the CSM kernel by its representation in the spectral domain. Section 6 analyzes the performance of this approximation and presents results for the CSM kernel in the neuroscience application, considering measured multi-region LFP data from the brain of a mouse. We conclude in Section 7 by discussing how this novel kernel can trivially be extended to any time-series application where GPs and the cross-spectrum are of interest.

2 Review of Multi-Output Gaussian Process Regression

A multi-output regression task estimates samples from C output channels, y_n = [y_{n1}, . . . , y_{nC}]^T, corresponding to the n-th input point x_n (e.g., the n-th temporal sample). An unobserved latent function f(x) = [f_1(x), . . . , f_C(x)]^T is responsible for generating the observations, such that y_n ∼ N(f(x_n), H^{−1}), where H = diag(η_1, . . . , η_C) is the precision of additive Gaussian noise. A GP prior on the latent function is formalized by f(x) ∼ GP(m(x), K(x, x′)) for arbitrary input x, where the mean function m(x) ∈ R^C is set equal to 0 without loss of generality, and the covariance function (K(x, x′))_{c,c′} = k_{c,c′}(x, x′) = cov(f_c(x), f_{c′}(x′)) creates dependencies between observations at input points x and x′, as observed on channels c and c′. In general, the input space x could be vector-valued, but for simplicity we here assume it to be scalar, consistent with our motivating neuroscience application in which x corresponds to time.

A convenient representation for multi-output kernel functions is to separate the kernel into the product of a kernel for the input space and a kernel for the interactions between the outputs. This is known as a separable kernel.
A sum of separable kernels (SoS) representation [2] is given by

$$k_{c,c'}(x, x') = \sum_{q=1}^{Q} b_q(c, c')\, k_q(x, x'), \quad\text{or}\quad \mathbf{K}(x, x') = \sum_{q=1}^{Q} \mathbf{B}_q\, k_q(x, x'), \qquad (1)$$

where k_q(x, x′) is the input-space kernel for component q, b_q(c, c′) is the q-th output interaction kernel, and B_q ∈ R^{C×C} is a positive semi-definite output kernel matrix. Note that we have a discrete set of C output spaces, c ∈ {1, . . . , C}, whereas the input space x is continuous and sampled at arbitrary discrete points in experiments. The SoS formulation is also known as the linear model of coregionalization (LMC) [10], and B_q is termed the coregionalization matrix. When Q = 1, the LMC reduces to the intrinsic coregionalization model (ICM) [2], and when rank(B_q) is restricted to equal 1, the LMC reduces to the semiparametric latent factor model (SLFM) [19].

Any finite number of latent function evaluations f = [f_1(x), . . . , f_C(x)]^T at locations x = [x_1, . . . , x_N]^T has a multivariate normal distribution N(f; 0, K), where K is formed through the block partitioning

$$\mathbf{K} = \begin{bmatrix} k_{1,1}(\mathbf{x}, \mathbf{x}) & \cdots & k_{1,C}(\mathbf{x}, \mathbf{x}) \\ \vdots & \ddots & \vdots \\ k_{C,1}(\mathbf{x}, \mathbf{x}) & \cdots & k_{C,C}(\mathbf{x}, \mathbf{x}) \end{bmatrix} = \sum_{q=1}^{Q} \mathbf{B}_q \otimes k_q(\mathbf{x}, \mathbf{x}), \qquad (2)$$

where each k_{c,c′}(x, x) is an N × N matrix and ⊗ denotes the Kronecker product. A vector-valued dataset consists of observations y = vec([y_1, . . . , y_N]^T) ∈ R^{CN} at the respective locations x = [x_1, . . . , x_N]^T, such that the first N elements of y are from channel 1, up to the last N elements belonging to channel C. Since both the likelihood p(y|f, x) and the distribution over latent functions p(f|x) are Gaussian, the marginal likelihood is conveniently represented by

$$p(\mathbf{y}|\mathbf{x}) = \int p(\mathbf{y}|\mathbf{f}, \mathbf{x})\, p(\mathbf{f}|\mathbf{x})\, d\mathbf{f} = N(\mathbf{0}, \boldsymbol{\Gamma}), \qquad \boldsymbol{\Gamma} = \mathbf{K} + \mathbf{H}^{-1} \otimes \mathbf{I}_N, \qquad (3)$$

where all possible functions f have been marginalized out. Each input-space covariance kernel is defined by a set of hyperparameters θ. This conditioning was omitted above for notational simplicity, but will henceforth be included in the notation.
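The block-partitioned covariance (2) and the marginal likelihood (3) can be assembled directly. The following is a minimal sketch in our own illustrative code (not from the paper), for the stationary case where each input-space kernel depends only on τ = x − x′:

```python
import numpy as np

def lmc_covariance(x, Bs, kernels):
    """K = sum_q B_q kron k_q(x, x), the sum-of-separable (LMC) kernel of eq. (2).
    Bs: list of C x C PSD coregionalization matrices; kernels: matching list of
    stationary input-space kernels k_q(tau)."""
    tau = x[:, None] - x[None, :]
    return sum(np.kron(B, k(tau)) for B, k in zip(Bs, kernels))

def log_marginal_likelihood(y, K, noise_prec):
    """log N(y; 0, Gamma) with Gamma = K + H^{-1} kron I_N, as in eq. (3);
    noise_prec is the vector (eta_1, ..., eta_C) of per-channel precisions."""
    N = K.shape[0] // len(noise_prec)
    Gamma = K + np.kron(np.diag(1.0 / noise_prec), np.eye(N))
    L = np.linalg.cholesky(Gamma)
    alpha = np.linalg.solve(L, y)
    return (-0.5 * alpha @ alpha
            - np.log(np.diag(L)).sum()
            - 0.5 * len(y) * np.log(2 * np.pi))
```

Note the Kronecker ordering matches the channel-major stacking of y (the first N entries belong to channel 1), so block (c, c′) of K equals (B_q)_{c,c′} k_q(x, x) summed over q.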
For example, if the squared exponential kernel is used, then k_SE(x, x′; θ) = exp(−½‖x − x′‖²/ℓ²), defined by a single hyperparameter θ = {ℓ}. To fit a GP to the dataset, the hyperparameters are typically chosen to maximize the marginal likelihood in (3) via gradient ascent.

3 Expressive Kernels in the Spectral Domain

This section first introduces the spectral mixture (SM) kernel [24], as well as a multi-output extension of the SM kernel within the LMC framework. While the SM-LMC model is capable of representing complex spectral relationships between channels, it does not intuitively model the cross-phase spectrum between channels. We propose a novel kernel, the cross-spectral mixture (CSM) kernel, that provides both the cross-amplitude and cross-phase spectra of multi-channel observations. Detailed derivations of each of these kernels are found in the Supplemental Material.

3.1 The Spectral Mixture Kernel

A spectral Gaussian (SG) kernel is defined by an amplitude spectrum with a single Gaussian distribution reflected about the origin,

$$S_{SG}(\omega; \theta) = \tfrac{1}{2}\big[N(\omega; -\mu, \nu) + N(\omega; \mu, \nu)\big], \qquad (4)$$

where θ = {µ, ν} are the kernel hyperparameters, µ represents the peak frequency, and the variance ν is a scale parameter that controls the spread of the spectrum around µ. This spectrum is a function of angular frequency. The Fourier transform of (4) results in the stationary, positive definite auto-covariance function

$$k_{SG}(\tau; \theta) = \exp(-\tfrac{1}{2}\nu\tau^2)\cos(\mu\tau), \qquad (5)$$

where stationarity implies dependence only on input-domain differences, k(τ; θ) = k(x, x′; θ) with τ = x − x′. The SG kernel may also be derived by considering a latent signal f(x) = √2 cos(ω(x + φ)) with frequency uncertainty ω ∼ N(µ, ν) and phase offset ωφ. The kernel is the auto-covariance function of f(x), such that k_SG(τ; θ) = cov(f(x), f(x + τ)). When computing the auto-covariance, the frequency ω is marginalized out, yielding the kernel in (5), which includes all frequencies in the spectral domain with probability 1.
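The Fourier pair (4)-(5) is easy to check numerically. The following minimal sketch (our own code; function names are ours) implements both sides and verifies that the kernel equals the transform of its spectrum:

```python
import numpy as np

def s_sg(omega, mu, nu):
    """SG amplitude spectrum, eq. (4): a Gaussian mirrored about the origin."""
    gauss = lambda w, m: np.exp(-0.5 * (w - m) ** 2 / nu) / np.sqrt(2 * np.pi * nu)
    return 0.5 * (gauss(omega, -mu) + gauss(omega, mu))

def k_sg(tau, mu, nu):
    """SG auto-covariance, eq. (5): the Fourier transform of (4)."""
    return np.exp(-0.5 * nu * tau ** 2) * np.cos(mu * tau)

# Numerically verify k(tau) = integral S(w) cos(w tau) dw for mu = 5, nu = 1.
omega = np.linspace(-20, 20, 4001)
tau = 0.3
k_numeric = np.trapz(s_sg(omega, mu=5.0, nu=1.0) * np.cos(omega * tau), omega)
assert np.isclose(k_numeric, k_sg(tau, 5.0, 1.0), atol=1e-6)
```

The cosine factor places the spectral mass at ±µ, while the Gaussian envelope exp(−½ντ²) corresponds to the bandwidth ν around each peak.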
A weighted linear combination of SG kernels gives the spectral mixture (SM) kernel [24],

$$k_{SM}(\tau; \theta) = \sum_{q=1}^{Q} a_q\, k_{SG}(\tau; \theta_q), \qquad S_{SM}(\omega; \theta) = \sum_{q=1}^{Q} a_q\, S_{SG}(\omega; \theta_q), \qquad (6)$$

where θ_q = {a_q, ν_q, µ_q} and θ = {θ_q} has 3Q degrees of freedom. The SM kernel may be derived as the Fourier transform of the spectral density S_SM(ω; θ), or as the auto-covariance of latent functions f(x) = Σ_{q=1}^{Q} √(2a_q) cos(ω_q(x + φ_q)) with uncertainty in angular frequency ω_q ∼ N(µ_q, ν_q).

[Figure 1: Latent functions drawn for two channels f₁(x) (blue) and f₂(x) (red) using the CSM kernel (left) and a rank-1 SM-LMC kernel (center). The functions are comprised of two SG components centered at 4 and 5 Hz. For the CSM kernel, we set the phase shift ψ_{c′,2} = π. Right: the cross-amplitude (purple) and cross-phase (green) spectra between f₁(x) and f₂(x) are shown for the CSM kernel (solid) and the SM-LMC kernel (dashed). The ability to tune phase relationships is beneficial for kernel design and interpretation.]

The moniker for the SM kernel in (6) reflects the mixture of Gaussian components that define the spectral density of the kernel. The SM kernel is able to represent any stationary covariance kernel given large enough Q; to name a few, this includes any combination of squared exponential, Matérn, rational quadratic, or periodic kernels [9, 16, 24].

3.2 The Cross-Spectral Mixture Kernel

A multi-output version of the SM kernel uses the SG kernel directly within the LMC framework:

$$\mathbf{K}_{SM\text{-}LMC}(\tau; \theta) = \sum_{q=1}^{Q} \mathbf{B}_q\, k_{SG}(\tau; \theta_q), \qquad (7)$$

where the Q SG kernels are shared among the outputs via the coregionalization matrices {B_q}_{q=1}^{Q}. A generalized, non-stationary version of this SM-LMC kernel was proposed in [23] using the Gaussian process regression network (GPRN) [26].
The marginal distribution for any single channel is simply a Gaussian process with an SM covariance kernel. While this formulation is capable of providing a full cross-amplitude spectrum between two channels, it contains no information about a cross-phase spectrum. Specifically, each channel is merely a weighted sum of Σ_q R_q latent functions, where R_q = rank(B_q). Whereas these functions are shared exactly across channels, our novel CSM kernel shares phase-shifted versions of these latent functions across channels.

Definition 3.1. The cross-spectral mixture (CSM) kernel takes the form

$$k^{c,c'}_{CSM}(\tau; \theta) = \sum_{q=1}^{Q} \sum_{r=1}^{R_q} \sqrt{a^r_{cq}\, a^r_{c'q}}\; \exp(-\tfrac{1}{2}\nu_q \tau^2)\, \cos\!\big(\mu_q (\tau + \phi^r_{c'q} - \phi^r_{cq})\big), \qquad (8)$$

where θ = {ν_q, µ_q, {a^r_{cq}, φ^r_{cq}, φ^r_{1q} ≜ 0}_{r=1}^{R_q}}_{q=1}^{Q} has 2Q + Σ_{q=1}^{Q} R_q(2C − 1) degrees of freedom, and a^r_{cq} and φ^r_{cq} respectively represent the amplitude and the input-space shift of the latent functions associated with channel c. In the LMC framework, the CSM kernel is

$$\mathbf{K}_{CSM}(\tau; \theta) = \mathrm{Re}\Big\{\sum_{q=1}^{Q} \mathbf{B}_q\, \tilde{k}_{SG}(\tau; \theta_q)\Big\}, \quad \mathbf{B}_q = \sum_{r=1}^{R_q} \boldsymbol{\beta}^r_q (\boldsymbol{\beta}^r_q)^{\dagger}, \quad \tilde{k}_{SG}(\tau; \theta_q) = \exp(-\tfrac{1}{2}\nu_q\tau^2 + j\mu_q\tau), \quad \beta^r_{cq} = \sqrt{a^r_{cq}}\, \exp(-j\psi^r_{cq}),$$

where k̃_SG(τ; θ_q) is phasor notation of the SG kernel, B_q is rank-R_q, {β^r_{cq}} are complex scalar coefficients encoding amplitude and phase, and ψ^r_{cq} ≜ µ_q φ^r_{cq} is an alternative phase representation. We use complex notation in which j = √−1, Re{·} returns the real component of its argument, and β† represents the complex conjugate of β. Both the CSM and SM-LMC kernels force the marginal distribution of data from a single channel to be a Gaussian process with an SM covariance kernel. The CSM kernel is derived in the Supplemental Material by considering functions represented by phase-shifted sinusoidal signals, f_c(x) = Σ_{q=1}^{Q} Σ_{r=1}^{R_q} √(2a^r_{cq}) cos(ω^r_q(x + φ^r_{cq})), where each ω^r_q is drawn i.i.d. from N(µ_q, ν_q). Computing the cross-covariance function cov(f_c(x), f_{c′}(x + τ)) provides the CSM kernel. A comparison between draws from Gaussian processes with CSM and SM-LMC kernels is shown in Figure 1.
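The cross-covariance in (8) can be evaluated directly; the following is our own illustrative sketch (array layout and names are ours, not the paper's):

```python
import numpy as np

def k_csm(tau, c1, c2, mu, nu, a, phi):
    """Cross-covariance between channels c1 and c2 under the CSM kernel, eq. (8).
    mu, nu: length-Q arrays of peak frequencies and bandwidths.
    a, phi: arrays of shape (Q, R, C) holding amplitudes a^r_cq and phase
    shifts phi^r_cq, with phi[:, :, 0] = 0 by the convention phi^r_1q = 0."""
    tau = np.asarray(tau, dtype=float)
    Q, R, C = a.shape
    k = np.zeros_like(tau)
    for q in range(Q):
        for r in range(R):
            amp = np.sqrt(a[q, r, c1] * a[q, r, c2])
            k = k + amp * np.exp(-0.5 * nu[q] * tau ** 2) \
                    * np.cos(mu[q] * (tau + phi[q, r, c2] - phi[q, r, c1]))
    return k
```

On the diagonal (c1 = c2) the phase terms cancel and the expression reduces to an SM kernel, matching the statement above that each channel's marginal is a GP with an SM covariance; off the diagonal, the kernel satisfies the reciprocity k_{c,c′}(τ) = k_{c′,c}(−τ) expected of a cross-covariance.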
The utility of the CSM kernel is clearly illustrated by its ability to encode phase information, as well as its powerful functional form of the full cross-spectrum (both amplitude and phase). The amplitude function $A_{c,c'}(\omega)$ and phase function $\Phi_{c,c'}(\omega)$ are obtained by representing the cross-spectrum in phasor notation, i.e., $\Gamma_{c,c'}(\omega; \Theta) = \sum_q (B_q)_{c,c'} S_{\mathrm{SG}}(\omega; \theta_q) = A_{c,c'}(\omega) \exp(j \Phi_{c,c'}(\omega))$. Interestingly, while the CSM and SM-LMC kernels have identical marginal amplitude spectra for shared $\{\mu_q, \nu_q, a_q\}$, their cross-amplitude spectra differ due to the inherent destructive interference of the CSM kernel (see Figure 1, right).

4 Multi-Channel HMM Analysis

Neuroscientists are interested in examining how the network structure of the brain changes as animals undergo a task or various levels of arousal [15]. The LFP signal is a modality that allows researchers to explore this network structure. In the model provided in this section, we cluster segments of the LFP signal into discrete "brain states" [21]. Each brain state is represented by a unique cross-spectrum provided by the CSM kernel. The use of the full cross-spectrum to define brain states is supported by previous work discovering that 1) the power spectral density of LFP signals indicates various levels of arousal states in mice [7, 21], and 2) frequency-dependent phase synchrony patterns change as animals undergo different conditions in a task [11, 17, 18] (see Figure 2). The vector-valued observations from $C$ channels are segmented into $W$ contiguous, non-overlapping windows. The windows are common across channels, such that the $C$-channel data for window $w \in \{1, \ldots, W\}$ are represented by $y^w_n = [y^w_{n1}, \ldots, y^w_{nC}]^T$ at sample location $x^w_n$. Given data, each window consists of $N_w$ temporal samples, but the model is defined for any set of sample locations. We model the observations $\{y^w_n\}$ as emissions from a hidden Markov model (HMM) with $L$ hidden, discrete states.
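Given the complex coefficients $(B_q)_{c,c'}$ and the Gaussian spectral densities, the cross-amplitude and cross-phase functions fall out of the phasor decomposition above. A small sketch; the exact normalization of the Gaussian density here is our assumption, and the paper's $S_{\mathrm{SG}}$ may differ by a constant or a symmetrization:

```python
import numpy as np

def s_sg(omega, mu, nu):
    """Gaussian spectral density centered at mu with variance nu (one-sided sketch)."""
    return np.exp(-0.5 * (omega - mu) ** 2 / nu) / np.sqrt(2 * np.pi * nu)

def cross_spectrum(omega, b_entries, mus, nus):
    """Gamma_{c,c'}(omega) = sum_q (B_q)_{c,c'} S_SG(omega; theta_q),
    returned as (amplitude, phase) via the phasor decomposition."""
    gamma = sum(b * s_sg(omega, m, n) for b, m, n in zip(b_entries, mus, nus))
    return np.abs(gamma), np.angle(gamma)

# One component centered at 5 rad/s whose complex coregionalization entry
# encodes an amplitude of 0.8 and a constant phase lag of 0.5 rad.
omega = np.linspace(0.0, 10.0, 201)
amp, phase = cross_spectrum(omega, b_entries=[0.8 * np.exp(-1j * 0.5)],
                            mus=[5.0], nus=[1.0])
```

With a single component the phase function is flat at the encoded lag; with several components the phase varies with frequency, which is exactly the frequency-dependent synchrony structure the CSM kernel is designed to capture.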
State assignments are represented by latent variables $\zeta_w \in \{1, \ldots, L\}$ for each window $w \in \{1, \ldots, W\}$. In general, $L$ is a set upper bound on the number of states (brain states [21], or "clusters"), but the model can shrink down and infer the number of states needed to fit the data. This is achieved by defining the dynamics of the latent states according to a Bayesian HMM [14]:

$$\zeta_1 \sim \mathrm{Categorical}(\rho_0), \qquad \zeta_w \sim \mathrm{Categorical}(\rho_{\zeta_{w-1}}) \;\; \forall w \geq 2, \qquad \rho_0, \rho_\ell \sim \mathrm{Dirichlet}(\nu),$$

where the initial state assignment is drawn from a categorical distribution with probability vector $\rho_0$ and all subsequent state assignments are drawn from the transition vector $\rho_{\zeta_{w-1}}$. Here, $\rho_{\ell h}$ is the probability of transitioning from state $\ell$ to state $h$. The vectors $\{\rho_0, \rho_1, \ldots, \rho_L\}$ are independently drawn from symmetric Dirichlet distributions centered around $\nu = [1/L, \ldots, 1/L]$ to impose sparsity on transition probabilities. In effect, this allows the model to learn the number of states needed for the data (i.e., fewer than $L$) [3]. Each cluster $\ell \in \{1, \ldots, L\}$ is assigned GP parameters $\theta_\ell$. The latent cluster assignment $\zeta_w$ for window $w$ indicates which set of GP parameters control the emission distribution of the HMM:

$$y^w_n \sim \mathcal{N}\big(f^w(x^w_n), H^{-1}_{\zeta_w}\big), \qquad f^w(x) \sim \mathcal{GP}\big(0, K(x, x'; \theta_{\zeta_w})\big), \qquad (9)$$

where $(K(x, x'; \theta_\ell))_{c,c'} = k^{c,c'}_{\mathrm{CSM}}(x, x'; \theta_\ell)$ is the CSM kernel, and the cluster-dependent precision $H_{\zeta_w} = \mathrm{diag}(\eta_{\zeta_w})$ generates independent Gaussian observation noise. In this way, each window $w$ is modeled as a stochastic process with a multi-channel cross-spectrum defined by $\theta_{\zeta_w}$.

Figure 2: (Panels show raw LFP data from the BLA and IL cortex channels, the cross-amplitude spectrum, and the cross-phase spectrum, with delta, theta, alpha, and beta bands marked.) A short segment of LFP data recorded from the basolateral amygdala and infralimbic cortex is shown on the left.
The cross-amplitude and phase spectra are produced using Welch's averaged periodogram method [22] for several consecutive 5 second windows of LFP data. Frequency-dependent phase synchrony lags are consistently present in the cross-phase spectrum, motivating the CSM kernel. This frequency dependency aligns with preconceived notions of bands, or brain waves (e.g., 8-12 Hz alpha waves).

5 Inference

A convenient notation vectorizes all observations within a window, $y^w = \mathrm{vec}([y^w_1, \ldots, y^w_{N_w}]^T)$, where $\mathrm{vec}(A)$ is the vectorization of matrix $A$; i.e., the first $N_w$ elements of $y^w$ are observations from channel 1, up to the last $N_w$ elements of $y^w$ belonging to channel $C$. Because samples are obtained on an evenly spaced temporal grid, we fix $N_w = N$ and align relative sample locations within a window to an oracle $x^w = x = [x_1, \ldots, x_N]^T$ for all $w$. The model in Section 4 generates the set of observations $Y = \{y^w\}_{w=1}^{W}$ at aligned sample locations $x$ given kernel hyperparameters $\Theta = \{\theta_\ell, \eta_\ell\}_{\ell=1}^{L}$ and model variables $\Omega = \{\{\rho_\ell\}_{\ell=0}^{L}, \{\zeta_w\}_{w=1}^{W}\}$. The latent variables $\Omega$ are inverted using mean-field variational inference [3], obtaining an approximate posterior distribution $q(\Omega) = q(\zeta_{1:W}) \prod_{\ell=0}^{L} \mathrm{Dir}(\rho_\ell; \alpha_\ell)$. The approximate posterior is chosen to minimize the KL divergence to the true posterior distribution $p(\Omega \mid Y, \Theta, x)$ using the standard variational EM method detailed in Chapter 3 of [3]. During each iteration of the variational EM algorithm, the kernel hyperparameters $\Theta$ are chosen to maximize the expected marginal log-likelihood $Q = \sum_{w=1}^{W} \sum_{\ell=1}^{L} q(\zeta_w = \ell) \log \mathcal{N}(y^w; 0, \Gamma_\ell)$ via gradient ascent, where $q(\zeta_w = \ell)$ is the marginal posterior probability that window $w$ is assigned to brain state $\ell$, and $\Gamma_\ell = \mathrm{Re}\{\tilde{\Gamma}_\ell\}$ is the CSM kernel matrix for state $\ell$ with the complex form $\tilde{\Gamma}_\ell = \sum_q B^\ell_q \otimes \tilde{k}_{\mathrm{SG}}(x, x; \theta_\ell) + H^{-1}_\ell \otimes I_N$. Performing gradient ascent requires the derivatives $\frac{\partial Q}{\partial \Theta_j} = \frac{1}{2} \sum_{w,\ell} \mathrm{tr}\big((\alpha_{\ell w} \alpha_{\ell w}^T - \Gamma^{-1}_\ell) \frac{\partial \Gamma_\ell}{\partial \Theta_j}\big)$, where $\alpha_{\ell w} = \Gamma^{-1}_\ell y^w$ [16].
A naïve implementation of this gradient requires the inversion of $\Gamma_\ell$, which has complexity $O(N^3 C^3)$ and storage requirements $O(N^2 C^2)$, since a simple method to invert a sum of Kronecker products does not exist. A common trick for GPs with evenly spaced samples (e.g., a temporal grid) is to use the discrete Fourier transform (DFT) to approximate the inverse of $\Gamma_\ell$ by viewing it as an approximately circulant matrix [5, 12]. These methods can speed up inference because circulant matrices are diagonalizable by the DFT coefficient matrix. Adjusting these methods to the multi-output formulation, we show how the DFT of the marginal covariance matrices retains the cross-spectrum information.

Proposition 5.1. Let $y^w \sim \mathcal{N}(0, \Gamma_{\zeta_w})$ represent the marginal likelihood of circularly-symmetric [8] real-valued observations in window $w$, and denote the concatenation of the DFT of each channel as $z^w = (I_C \otimes U)^\dagger y^w$, where $U$ is the $N \times N$ unitary DFT matrix. Then, $z^w$ is shown in the Supplemental Material to have the complex normal distribution [8]:

$$z^w \sim \mathcal{CN}(0, 2 S_{\zeta_w}), \qquad S_\ell = \delta^{-1} \sum_{q=1}^{Q} B^\ell_q \otimes W^\ell_q + H^{-1}_\ell \otimes I_N, \qquad (10)$$

where $\delta = x_{i+1} - x_i$ for all $i = 1, \ldots, N-1$, and $W^\ell_q \approx \mathrm{diag}([S_{\mathrm{SG}}(\omega; \theta_{\ell q}), \mathbf{0}])$ is approximately diagonal. The spectral density $S_{\mathrm{SG}}(\omega; \theta) = [S_{\mathrm{SG}}(\omega_1; \theta), \ldots, S_{\mathrm{SG}}(\omega_{\lfloor (N+1)/2 \rfloor}; \theta)]$ is found via (4) at angular frequencies $\omega = \frac{2\pi}{N\delta}\,[0, 1, \ldots, \frac{N}{2}]$, and $\mathbf{0} = [0, \ldots, 0]$ is a row vector of $\frac{N-1}{2}$ zeros. The hyperparameters of the CSM kernels $\Theta$ may now be optimized from the expected marginal log-likelihood of $Z = \{z^w\}_{w=1}^{W}$ instead of $Y$. Conceptually, the only difference during the fitting process is that, with the latter, derivatives of the covariance kernel are used, while, with the former, derivatives of the power spectral density are used. Computationally, this method improves the naïve $O(N^3 C^3)$ complexity of fitting the standard CSM kernel to $O(N C^3)$ complexity. Memory requirements are also reduced from $O(N^2 C^2)$ to $O(N C^2)$.
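The circulant trick referenced above rests on the fact that a circulant matrix is exactly diagonalized by the DFT basis; a stationary Toeplitz covariance is then treated as approximately circulant. A self-contained numpy check of the exact circulant case, with an arbitrary stationary kernel chosen only for illustration:

```python
import numpy as np

N = 64
delta = 1.0 / 200.0  # sampling interval, matching the paper's 200 Hz examples

def kern(tau):
    """Arbitrary stationary kernel used only to build a test matrix."""
    return np.exp(-0.5 * 40.0 * tau**2) * np.cos(2 * np.pi * 5.0 * tau)

# Circulant covariance: lags wrap around the window.
ii, jj = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
d = np.minimum(np.abs(ii - jj), N - np.abs(ii - jj))
C = kern(d * delta)

# Unitary DFT matrix; U^dagger C U is numerically diagonal, with the
# eigenvalues given by the FFT of the first row.
U = np.fft.fft(np.eye(N)) / np.sqrt(N)
D = U.conj().T @ C @ U
off_diag = D - np.diag(np.diag(D))
```

For a genuinely Toeplitz (non-wrapped) covariance the off-diagonal terms do not vanish exactly, which is the source of the approximation error quantified in Figure 3.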
The reason for this improvement is that $S_\ell$ is now represented as $N$ independent $C \times C$ blocks, reducing the inversion of $S_\ell$ to inverting a permuted block-diagonal matrix.

6 Experiments

Section 6.1 demonstrates the performance of the CSM kernel and the accuracy of the DFT approximation. In Section 6.2, the DFT approximation for the CSM kernel is used in a Bayesian HMM framework to cluster time-varying multi-channel LFP data based on the full cross-spectrum; the HMM states here correspond to states of the brain during LFP recording.

Figure 3: Time-series data are drawn from a Gaussian process with a known CSM covariance kernel, where the domain is restricted to a fixed number of seconds. A Gaussian process is then fitted to these data using the DFT approximation. The KL divergence of the fitted marginal likelihood from the true marginal likelihood is shown (legend: $\tilde{\mu} = 0.5$, 1, and 3 Hz).

Table 1: The mean and standard deviation of the difference between the AIC value of a given model and the AIC value of the rank-2 CSM model. Lower values are better.

Rank  Model    ΔAIC
1     SE-LMC   4770 (993)
1     SM-LMC   512 (190)
1     CSM      109 (110)
2     SE-LMC   5180 (1120)
2     SM-LMC   325 (167)
2     CSM      0 (0)
3     SE-LMC   5550 (1240)
3     SM-LMC   412 (184)
3     CSM      204 (71.7)

6.1 Performance and Inference Analysis

The performance of the CSM kernel is compared to the SM-LMC kernel and the SE-LMC (squared exponential) kernel. Each of these models allows Q = 20, and the rank of the coregionalization matrices is varied from rank-1 to rank-3. For a given rank, the CSM kernel always obtains the largest marginal likelihood for a window of LFP data, and the marginal likelihood always increases for increasing rank. To penalize the number of kernel parameters (e.g., a rank-3, Q = 20 CSM kernel for 7 channels has 827 free parameters to optimize), the Akaike information criterion (AIC) is used for model selection [1].
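The AIC values in Table 1 trade fit against parameter count, $\mathrm{AIC} = 2k - 2\log\hat{L}$. A small sketch; counting the degrees of freedom from Definition 3.1 plus one noise precision per channel (our assumption about what is counted as "free") reproduces the quoted 827 parameters:

```python
def csm_num_params(Q, ranks, C, include_noise=True):
    """Degrees of freedom of the CSM kernel, 2Q + sum_q R_q (2C - 1),
    optionally plus one noise precision per channel (an assumption)."""
    k = 2 * Q + sum(R * (2 * C - 1) for R in ranks)
    return k + (C if include_noise else 0)

def aic(log_likelihood, n_params):
    """Akaike information criterion: 2k - 2 log L-hat; lower is better."""
    return 2 * n_params - 2 * log_likelihood

n = csm_num_params(Q=20, ranks=[3] * 20, C=7)  # -> 827
```

Reporting differences relative to the best model, as Table 1 does, removes the shared data-dependent constant and makes the comparison across kernels direct.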
For this reason, we do not test ranks greater than 3. Table 1 shows that a rank-2 CSM kernel is selected using this criterion, followed by a rank-1 CSM kernel. To show that the rank-2 CSM kernel is consistently selected as the preferred model, we report means and standard deviations of AIC value differences across 30 different randomly selected 3-second windows of LFP data. Next, we provide numerical results for the conditions required when using the DFT approximation in (10). This allows one to define the details of a particular application in order to determine whether the DFT approximation to the CSM kernel is appropriate. A CSM kernel is defined for two outputs with a single Gaussian component, $Q = 1$. The mean frequency and variance for this component are set to push the limits of the application. For example, with LFP data, low frequency content is of interest, namely greater than 1 Hz; therefore, we test values of $\tilde{\mu}_1 \in \{\tfrac{1}{2}, 1, 3\}$ Hz. We anticipate variances at these frequencies to be around $\tilde{\nu}_1 = 1$ Hz². A conversion to angular frequency gives $\mu_1 = 2\pi \tilde{\mu}_1$ and $\nu_1 = 4\pi^2 \tilde{\nu}_1$. The covariance matrix $\Gamma$ in (3) is formed using these parameters, a fixed noise variance, and $N$ observations on a time grid with a sampling rate of 200 Hz. Data $y$ are drawn from the marginal likelihood with covariance $\Gamma$. A new CSM kernel is fit to $y$ using the DFT approximation, providing an estimate $\hat{\Gamma}$. The KL divergence of the fitted marginal likelihood from the true marginal likelihood is

$$\mathrm{KL}\big(p(y \mid \hat{\Gamma}) \,\|\, p(y \mid \Gamma)\big) = \frac{1}{2} \left[ \log \frac{|\Gamma|}{|\hat{\Gamma}|} - N + \mathrm{tr}(\Gamma^{-1} \hat{\Gamma}) \right],$$

where $|\cdot|$ and $\mathrm{tr}(\cdot)$ are the determinant and trace operators, respectively. Computing $\frac{1}{N} \mathrm{KL}(p(y \mid \hat{\Gamma}) \| p(y \mid \Gamma))$ for various values of $\tilde{\mu}_1$ and $N$ provides the results in Figure 3. This plot shows that the DFT approximation struggles to resolve low frequency components unless the series length is sufficiently long.
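The KL divergence above is the standard expression for two zero-mean Gaussians; a direct numpy transcription, using `slogdet` for numerical stability:

```python
import numpy as np

def gauss_kl(cov_hat, cov):
    """KL( N(0, cov_hat) || N(0, cov) ), matching the expression in the text."""
    n = cov.shape[0]
    _, logdet = np.linalg.slogdet(cov)
    _, logdet_hat = np.linalg.slogdet(cov_hat)
    return 0.5 * (logdet - logdet_hat - n + np.trace(np.linalg.solve(cov, cov_hat)))
```

`gauss_kl(G, G)` is zero, the divergence is nonnegative for valid covariances, and dividing by `n` gives the per-sample quantity plotted in Figure 3.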
Due to the approximation error, when using the DFT approximation on LFP data we a priori filter out frequencies below 1.5 Hz and perform analyses with a series length of 3 seconds. This ensures the DFT approximation represents the true covariance matrix. The following application of the CSM kernel uses these settings.

6.2 Including the CSM Kernel in a Bayesian Hierarchical Model

We analyze 12 hours of LFP data of a mouse transitioning between different stages of sleep [7, 21]. Observations were recorded simultaneously from 4 channels [6], high-pass filtered at 1.5 Hz, and subsampled to 200 Hz. Using 3-second windows provides $N = 600$ and $W = 14{,}400$. The HMM was implemented with the number of kernel components $Q = 15$ and the number of states $L = 7$.

Figure 4: (Panels are indexed by the recorded channels BasalAmy, DLS, DMS, and DHipp.) A subset of results from the Bayesian HMM analysis of brain states. In the upper left, the full cross-spectrum for an arbitrary state (state 7) is plotted. In the upper right, the amplitude (top) and phase (bottom) functions for the cross-spectrum between the Dorsomedial Striatum (DMS) and Hippocampus (DHipp) are shown for all seven states. On the bottom, the maximum likelihood state assignments are shown and compared to the state assignments from [7]. The same colors between the CSM state assignments and the phase and amplitude functions correspond to the same state.
These colors are aligned to the [7] states, but there is no explicit relationship between the colors of the two state sequences.

The value $L = 7$ was chosen because sleep staging tasks categorize as many as seven states: various levels of rapid eye movement, slow wave sleep, and wake [20]. Although rigorous model selection on $L$ is necessary to draw scientific conclusions from the results, the purpose of this experiment is to illustrate the utility of the CSM kernel in this application. An illustrative subset of the results is shown in Figure 4. The full cross-spectrum is shown for a single state (state 7), and the cross-spectrum between the Dorsomedial Striatum and the Dorsal Hippocampus is shown for all states. Furthermore, we show the progression of these brain state assignments over 3 hours and compare them to states from the method of [7], where statistics of the Hippocampus spectral density were clustered in an ad hoc fashion. To the best of our knowledge, this method represents the most relevant and accurate results for sleep staging from LFP signals in the neuroscience literature. From these results, it is apparent that our clusters pick up sub-states of [7]. For instance, states 3, 6, and 7 all appear with high probability when the method from [7] infers state 3. Observing the cross-phase function of sub-state 7 reveals striking differences from other states in the theta wave (4-7 Hz) and the alpha wave (8-15 Hz). This cross-phase function is nearly identical for states 2 and 5, implying that significant differences in the cross-amplitude spectrum may have played a role in identifying the difference between these two brain states. Many more of these interesting details exist due to the expressive nature of the CSM kernel.
While a full interpretation of the cross-spectrum results is not the focus of this work, we contend that the CSM kernel has the potential to make a tremendous impact in fields such as neuroscience, where the dynamics of cross-spectrum relationships of LFP signals are of great interest.

7 Conclusion

This work introduces the cross-spectral mixture kernel as an expressive kernel capable of extracting patterns from multi-channel observations. Combined with the powerful nonparametric representation of a Gaussian process, the CSM kernel expresses a functional form for every pairwise cross-spectrum between channels. This is a novel approach that merges Gaussian processes from the machine learning community with standard signal processing techniques. We believe the CSM kernel has the potential to impact a broad array of disciplines, since the kernel can trivially be extended to any time-series application where Gaussian processes and the cross-spectrum are of interest.

Acknowledgments

The research reported here was funded in part by ARO, DARPA, DOE, NGA and ONR.

References

[1] H. Akaike. A new look at the statistical model identification. IEEE Transactions on Automatic Control, 19(6):716–723, 1974.
[2] M. A. Alvarez, L. Rosasco, and N. D. Lawrence. Kernels for vector-valued functions: a review. Foundations and Trends in Machine Learning, 4(3):195–266, 2012.
[3] M. J. Beal. Variational Algorithms for Approximate Bayesian Inference. PhD thesis, University College London.
[4] R. Caruana. Multitask learning. Machine Learning, 28(1):41–75, 1997.
[5] C. R. Dietrich and G. N. Newsam. Fast and exact simulation of stationary Gaussian processes through circulant embedding of the covariance matrix. SIAM Journal on Scientific Computing, 18(4):1088–1107, 1997.
[6] K. Dzirasa, R. Fuentes, S. Kumar, J. M. Potes, and M. A. L. Nicolelis. Chronic in vivo multi-circuit neurophysiological recordings in mice. Journal of Neuroscience Methods, 195(1):36–46, 2011.
[7] K. Dzirasa, S. Ribeiro, R.
Costa, L. M. Santos, S. C. Lin, A. Grosmark, T. D. Sotnikova, R. R. Gainetdinov, M. G. Caron, and M. A. L. Nicolelis. Dopaminergic control of sleep–wake states. The Journal of Neuroscience, 26(41):10577–10589, 2006.
[8] R. G. Gallager. Principles of Digital Communication, pages 229–232, 2008.
[9] M. Gönen and E. Alpaydın. Multiple kernel learning algorithms. JMLR, 12:2211–2268, 2011.
[10] P. Goovaerts. Geostatistics for Natural Resources Evaluation. Oxford University Press, 1997.
[11] G. G. Gregoriou, S. J. Gotts, H. Zhou, and R. Desimone. High-frequency, long-range coupling between prefrontal and visual cortex during attention. Science, 324(5931):1207–1210, 2009.
[12] M. Lázaro-Gredilla, J. Quiñonero-Candela, C. E. Rasmussen, and A. R. Figueiras-Vidal. Sparse spectrum Gaussian process regression. JMLR, 11:1865–1881, 2010.
[13] J. R. Lloyd, D. Duvenaud, R. Grosse, J. B. Tenenbaum, and Z. Ghahramani. Automatic construction and natural-language description of nonparametric regression models. AAAI, 2014.
[14] D. J. C. MacKay. Ensemble learning for hidden Markov models. Technical report, 1997.
[15] D. Pfaff, A. Ribeiro, J. Matthews, and L. Kow. Concepts and mechanisms of generalized central nervous system arousal. ANYAS, 2008.
[16] C. E. Rasmussen and C. K. I. Williams. Gaussian Processes for Machine Learning. 2006.
[17] P. Sauseng and W. Klimesch. What does phase information of oscillatory brain activity tell us about cognitive processes? Neuroscience and Biobehavioral Reviews, 32:1001–1013, 2008.
[18] C. M. Sweeney-Reed, T. Zaehle, J. Voges, F. C. Schmitt, L. Buentjen, K. Kopitzki, C. Esslinger, H. Hinrichs, H. J. Heinze, R. T. Knight, and A. Richardson-Klavehn. Corticothalamic phase synchrony and cross-frequency coupling predict human memory formation. eLife, 2014.
[19] Y. W. Teh, M. Seeger, and M. I. Jordan. Semiparametric latent factor models. AISTATS, 10:333–340, 2005.
[20] M. A. Tucker, Y. Hirota, E. J. Wamsley, H. Lau, A. Chaklader, and W. Fishbein.
A daytime nap containing solely non-REM sleep enhances declarative but not procedural memory. Neurobiology of Learning and Memory, 86(2):241–247, 2006.
[21] K. Ulrich, D. E. Carlson, W. Lian, J. S. Borg, K. Dzirasa, and L. Carin. Analysis of brain states from multi-region LFP time-series. NIPS, 2014.
[22] P. D. Welch. The use of fast Fourier transform for the estimation of power spectra: A method based on time averaging over short, modified periodograms. IEEE Transactions on Audio and Electroacoustics, 15(2):70–73, 1967.
[23] A. G. Wilson. Covariance kernels for fast automatic pattern discovery and extrapolation with Gaussian processes. PhD thesis, University of Cambridge, 2014.
[24] A. G. Wilson and R. P. Adams. Gaussian process kernels for pattern discovery and extrapolation. ICML, 2013.
[25] A. G. Wilson, E. Gilboa, A. Nehorai, and J. P. Cunningham. Fast kernel learning for multidimensional pattern extrapolation. NIPS, 2014.
[26] A. G. Wilson and D. A. Knowles. Gaussian process regression networks. ICML, 2012.
[27] Z. Yang, A. J. Smola, L. Song, and A. G. Wilson. À la carte – learning fast kernels. AISTATS, 2015.
A Framework for Individualizing Predictions of Disease Trajectories by Exploiting Multi-Resolution Structure

Peter Schulam, Dept. of Computer Science, Johns Hopkins University, Baltimore, MD 21218, pschulam@jhu.edu
Suchi Saria, Dept. of Computer Science, Johns Hopkins University, Baltimore, MD 21218, ssaria@cs.jhu.edu

Abstract

For many complex diseases, there is a wide variety of ways in which an individual can manifest the disease. The challenge of personalized medicine is to develop tools that can accurately predict the trajectory of an individual's disease, which can in turn enable clinicians to optimize treatments. We represent an individual's disease trajectory as a continuous-valued, continuous-time function describing the severity of the disease over time. We propose a hierarchical latent variable model that individualizes predictions of disease trajectories. This model shares statistical strength across observations at different resolutions: the population, subpopulation, and individual levels. We describe an algorithm for learning population and subpopulation parameters offline, and an online procedure for dynamically learning individual-specific parameters. Finally, we validate our model on the task of predicting the course of interstitial lung disease, a leading cause of death among patients with the autoimmune disease scleroderma. We compare our approach against the state of the art and demonstrate significant improvements in predictive accuracy.

1 Introduction

In complex, chronic diseases such as autism, lupus, and Parkinson's, the way the disease manifests may vary greatly across individuals [1]. For example, in scleroderma, the disease we use as a running example in this work, individuals may be affected across six organ systems (the lungs, heart, skin, gastrointestinal tract, kidneys, and vasculature) to varying extents [2].
For any single organ system, some individuals may show rapid decline throughout the course of their disease, while others may show early decline but stabilize later on. Often in such diseases, the most effective drugs have strong side-effects. With tools that can accurately predict an individual's disease activity trajectory, clinicians can more aggressively treat those at greatest risk early, rather than waiting until the disease progresses to a high level of severity. To monitor the disease, physicians use clinical markers to quantify severity. In scleroderma, for example, PFVC is a clinical marker used to measure lung severity. The task of individualized prediction of disease activity trajectories is that of using an individual's clinical history to predict the future course of a clinical marker; in other words, the goal is to predict a function representing a trajectory that is updated dynamically using an individual's previous markers and individual characteristics. Predicting disease activity trajectories presents a number of challenges. First, there are multiple latent factors that cause heterogeneity across individuals. One such factor is the underlying biological mechanism driving the disease. For example, two different genetic mutations may trigger distinct disease trajectories (e.g. as in Figures 1a and 1b). If we could divide individuals into groups according to their mechanisms, or disease subtypes (see e.g. [3, 4, 5, 6]), it would be straightforward to fit separate models to each subpopulation. In most complex diseases, however, the mechanisms are poorly understood and clear definitions of subtypes do not exist. If subtype alone determined trajectory, then we could cluster individuals. However, other unobserved individual-specific factors such as behavior and prior exposures affect health and can cause different trajectories across individuals of the same subtype.
For instance, a chronic smoker will typically have unhealthy lungs and so may have a trajectory that is consistently lower than a non-smoker's, which we must account for using individual-specific parameters. An individual's trajectory may also be influenced by transient factors, e.g. an infection unrelated to the disease that makes it difficult to breathe (similar to the "dips" in Figure 1c or the third row in Figure 1d). This can cause marker values to temporarily drop, and may be hard to distinguish from disease activity. We show that these factors can be arranged in a hierarchy (population, subpopulation, and individual), but that not all levels of the hierarchy are observed. Finally, the functional outcome is a rich target, and therefore more challenging to model than scalar outcomes. In addition, the marker data is observed in continuous time and is irregularly sampled, making commonly used discrete-time approaches to time series modeling (or approaches that rely on imputation) not well suited to this domain.

Related work. The majority of predictive models in medicine explain variability in the target outcome by conditioning on observed risk factors alone. However, these do not account for latent sources of variability such as those discussed above. Further, these models are typically cross-sectional: they use features from data measured up until the current time to predict a clinical marker or outcome at a fixed point in the future. As an example, consider the mortality prediction model by Lee et al. [7], where logistic regression is used to integrate features into a prediction about the probability of death within 30 days for a given patient. To predict the outcome at multiple time points, typically separate models are trained. Moreover, these models use data from a fixed-size window, rather than a growing history. Researchers in the statistics and machine learning communities have proposed solutions that address a number of these limitations.
Most related to our work is that by Rizopoulos [8], where the focus is on making dynamical predictions about a time-to-event outcome (e.g. time until death). Their model updates predictions over time using all previously observed values of a longitudinally recorded marker. Besides conditioning on observed factors, they account for latent heterogeneity across individuals by allowing for individual-specific adjustments to the population-level model; e.g. for a longitudinal marker, deviations from the population baseline are modeled using random effects by sampling individual-specific intercepts from a common distribution. Other closely related work by Proust-Lima et al. [9] tackles a similar problem to Rizopoulos, but addresses heterogeneity using a mixture model. Another common approach to dynamical predictions is to use Markov models such as order-p autoregressive models (AR-p), HMMs, state space models, and dynamic Bayesian networks (see e.g. [10]). Although such models naturally make dynamic predictions using the full history by forward-filtering, they typically assume discrete, regularly-spaced observation times. Gaussian processes (GPs) are a commonly used alternative for handling continuous-time observations; see Roberts et al. [11] for a recent review of GP time series models. Since Gaussian processes are nonparametric generative models of functions, they naturally produce functional predictions dynamically by using the posterior predictive conditioned on the observed data. Mixtures of GPs have been applied to model heterogeneity in the covariance structure across time series (e.g. [12]); however, as noted in Roberts et al., appropriate mean functions are critical for accurate forecasts using GPs. In our work, an individual's trajectory is expressed as a GP with a highly structured mean comprising population, subpopulation, and individual-level components, where some components are observed and others require inference.
More broadly, multi-level models have been applied in many fields to model heterogeneous collections of units that are organized within a hierarchy [13]. For example, in predicting student grades over time, individuals within a school may have parameters sampled from the school-level model, and the school-level model parameters in turn may be sampled from a county-specific model. In our setting, the hierarchical structure (which individuals belong to the same subgroup) is not known a priori. Similar ideas are studied in multi-task learning, where relationships between distinct prediction tasks are used to encourage similar parameters. This has been applied to modeling trajectories by treating predictions at each time point as a separate task and enforcing similarity between submodels close in time [14]. This approach is limited, however, in that it models a finite number of times. Others, more recently, have developed models for disease trajectories (see [15, 16] and references within), but these focus on retrospective analysis to discover disease etiology rather than dynamical prediction. Schulam et al. [16] incorporate differences in trajectories due to subtypes and individual-specific factors. We build upon this work here. Finally, recommender systems also share information across individuals with the aim of tailoring predictions (see e.g. [17, 18, 19]), but the task is otherwise distinct from ours.

Figure 1: Plots (a-c) show example marker trajectories. Plot (d) shows adjustments to a population and subpopulation fit (row 1). Row 2 makes an individual-specific long-term adjustment. Row 3 makes short-term structured noise adjustments. Plot (e) shows the proposed graphical model. Levels in the hierarchy are color-coded. Model parameters are enclosed in dashed circles. Observed random variables are shaded.

Contributions. We propose a hierarchical model of disease activity trajectories that directly addresses common, latent and observed, sources of heterogeneity in complex, chronic diseases using three levels: the population level, subpopulation level, and individual level. The model discovers the subpopulation structure automatically, and infers individual-level structure over time when making predictions. In addition, we include a Gaussian process as a model of structured noise, which is designed to explain away temporary sources of variability that are unrelated to disease activity. Together, these four components allow individual trajectories to be highly heterogeneous while simultaneously sharing statistical strength across observations at different "resolutions" of the data. When making predictions for a given individual, we use Bayesian inference to dynamically update our posterior belief over individual-specific parameters given the clinical history and use the posterior predictive to produce a trajectory estimate. Finally, we evaluate our approach by developing a state-of-the-art trajectory prediction tool for lung disease in scleroderma. We train our model using a large, national dataset containing individuals with scleroderma tracked over 20 years and compare our predictions against alternative approaches. We find that our approach yields significant gains in predictive accuracy of disease activity trajectories.

2 Disease Trajectory Model

We describe a hierarchical model of an individual's clinical marker values. The graphical model is shown in Figure 1e.
For each individual i, we use N_i to denote the number of observed markers. We denote each individual observation using y_ij and its measurement time using t_ij, where j ∈ {1, ..., N_i}. We use ⃗y_i ∈ R^{N_i} and ⃗t_i ∈ R^{N_i} to denote all of individual i's marker values and measurement times respectively. In the following discussion, Φ(t_ij) denotes a column vector containing a basis expansion of the time t_ij, and we use Φ(⃗t_i) = [Φ(t_i1), ..., Φ(t_iN_i)]^⊤ to denote the matrix containing the basis expansion of the points in ⃗t_i in its rows. We model the jth marker value for individual i as a normally distributed random variable whose mean is the sum of four terms: a population component, a subpopulation component, an individual component, and a structured noise component:

$$y_{ij} \sim \mathcal{N}\Big(\underbrace{\Phi_p(t_{ij})^\top \Lambda \vec{x}_{ip}}_{\text{(A) population}} + \underbrace{\Phi_z(t_{ij})^\top \vec{\beta}_{z_i}}_{\text{(B) subpopulation}} + \underbrace{\Phi_\ell(t_{ij})^\top \vec{b}_i}_{\text{(C) individual}} + \underbrace{f_i(t_{ij})}_{\text{(D) structured noise}},\ \sigma^2\Big). \qquad (1)$$

The four terms in the sum serve two purposes. First, they allow a number of different sources of variation to influence the observed marker value, which allows for heterogeneity both across and within individuals. Second, they share statistical strength across different subsets of observations. The population component shares strength across all observations. The subpopulation component shares strength across observations belonging to subgroups of individuals. The individual component shares strength across all observations belonging to the same individual. Finally, the structured noise shares information across observations belonging to the same individual that are measured at similar times. Predicting an individual's trajectory involves estimating her subtype and individual-specific parameters as new clinical data becomes available¹. We describe each of the components in detail below.

Population level.
The population model predicts aspects of an individual's disease activity trajectory using observed baseline characteristics (e.g. gender and race), which are represented using the feature vector ⃗x_ip. This sub-model is shown within the orange box in Figure 1e. Here we assume that this component is a linear model whose coefficients are a function of the features ⃗x_ip ∈ R^{q_p}. The predicted value of the jth marker of individual i measured at time t_ij is shown in Eq. 1 (A), where Φ_p(t) ∈ R^{d_p} is a basis expansion of the observation time and Λ ∈ R^{d_p × q_p} is a matrix used as a linear map from an individual's covariates ⃗x_ip to coefficients ⃗ρ_i ∈ R^{d_p}. At this level, individuals with similar covariates will have similar coefficients. The matrix Λ is learned offline.

Subpopulation level. We model an individual's subtype using a discrete-valued latent variable z_i ∈ {1, ..., G}, where G is the number of subtypes. We associate each subtype with a unique disease activity trajectory represented using B-splines, where the number and location of the knots and the degree of the polynomial pieces are fixed prior to learning. These hyper-parameters determine a basis expansion Φ_z(t) ∈ R^{d_z} mapping a time t to the B-spline basis function values at that time. Trajectories for each subtype are parameterized by a vector of coefficients ⃗β_g ∈ R^{d_z} for g ∈ {1, ..., G}, which are learned offline. Under subtype z_i, the predicted value of marker y_ij measured at time t_ij is shown in Eq. 1 (B). This component explains differences such as those observed between the trajectories in Figures 1a and 1b. In many cases, features at baseline may be predictive of subtype. For example, in scleroderma, the types of antibody an individual produces (i.e. the presence of certain proteins in the blood) are correlated with certain trajectories. We can improve predictive performance by conditioning on baseline covariates to infer the subtype.
To do this, we use a multinomial logistic regression to define feature-dependent marginal probabilities: z_i | ⃗x_iz ∼ Mult(π_{1:G}(⃗x_iz)), where π_g(⃗x_iz) ∝ exp(⃗w_g^⊤ ⃗x_iz). We denote the weights of the multinomial regression using ⃗w_{1:G}, where the weights of the first class are constrained to be ⃗0 to ensure model identifiability. The remaining weights are learned offline.

Individual level. This level models deviations from the population and subpopulation models using parameters that are learned dynamically as the individual's clinical history grows. Here, we parameterize the individual component using a linear model with basis expansion Φ_ℓ(t) ∈ R^{d_ℓ} and individual-specific coefficients ⃗b_i ∈ R^{d_ℓ}. An individual's coefficients are modeled as latent variables with marginal distribution ⃗b_i ∼ N(⃗0, Σ_b). For individual i, the predicted value of marker y_ij measured at time t_ij is shown in Eq. 1 (C). This component can explain, for example, differences in overall health due to an unobserved characteristic such as chronic smoking, which may cause atypically lower lung function than what is predicted by the population and subpopulation components. Such an adjustment is illustrated across the first and second rows of Figure 1d.

Structured noise. Finally, the structured noise component f_i captures transient trends. For example, an infection may cause an individual's lung function to temporarily appear more restricted than it actually is, which may cause short-term trends like those shown in Figure 1c and the third row of Figure 1d. We treat f_i as a function-valued latent variable and model it using a Gaussian process with zero-valued mean function and Ornstein-Uhlenbeck (OU) covariance function:

$$K_{\mathrm{OU}}(t_1, t_2) = a^2 \exp\big(-\ell^{-1}\,|t_1 - t_2|\big).$$

The amplitude a controls the magnitude of the structured noise that we expect to see, and the length-scale ℓ controls the length of time over which we expect these temporary trends to occur.
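The OU covariance above is simple to implement. A minimal numpy sketch (the default amplitude and length-scale match the values a = 6, ℓ = 2 used later in Section 3; the function name is illustrative):

```python
import numpy as np

def k_ou(t1, t2, a=6.0, ell=2.0):
    """Ornstein-Uhlenbeck covariance K_OU(t1, t2) = a^2 exp(-|t1 - t2| / ell).
    `a` sets the magnitude of transient deviations; `ell` the time scale
    over which they persist."""
    t1 = np.atleast_1d(np.asarray(t1, float))
    t2 = np.atleast_1d(np.asarray(t2, float))
    return a**2 * np.exp(-np.abs(t1[:, None] - t2[None, :]) / ell)

t = np.array([0.0, 1.0, 5.0])
K = k_ou(t, t)   # 3x3 covariance over the measurement times
```

Nearby times are strongly correlated (K[0, 1] = 36·e^{-1/2}) while distant ones decouple, which is exactly the short-memory behavior the text motivates.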
The OU kernel is ideal for modeling such deviations as it is both mean-reverting and draws from the corresponding stochastic process are only first-order continuous, which eliminates long-range dependencies between deviations [20]. Applications in other domains may require different kernel structures motivated by properties of the noise in the trajectories.

¹The model focuses on predicting the long-term trajectory of an individual when left untreated. In many chronic conditions, as is the case for scleroderma, drugs only provide short-term relief (accounted for in our model by the individual-specific adjustments). If treatments that alter long-term course are available and commonly prescribed, then these should be included within the model as an additional component that influences the trajectory.

2.1 Learning

Objective function. To learn the parameters of our model Θ = {Λ, ⃗w_{1:G}, ⃗β_{1:G}, Σ_b, a, ℓ, σ²}, we maximize the observed-data log-likelihood (i.e. the probability of all individuals' marker values ⃗y_i given measurement times ⃗t_i and features {⃗x_ip, ⃗x_iz}). This requires marginalizing over the latent variables {z_i, ⃗b_i, f_i} for each individual, which yields a mixture of multivariate normals:

$$P(\vec{y}_i \mid X_i, \Theta) = \sum_{z_i=1}^{G} \pi_{z_i}(\vec{x}_{iz})\, \mathcal{N}\big(\vec{y}_i \mid \Phi_p(\vec{t}_i)\,\Lambda\,\vec{x}_{ip} + \Phi_z(\vec{t}_i)\,\vec{\beta}_{z_i},\; K(\vec{t}_i, \vec{t}_i)\big), \qquad (2)$$

where K(t_1, t_2) = Φ_ℓ(t_1)^⊤ Σ_b Φ_ℓ(t_2) + K_OU(t_1, t_2) + σ² I(t_1 = t_2). The observed-data log-likelihood for all individuals is therefore L(Θ) = Σ_{i=1}^M log P(⃗y_i | X_i, Θ). A more detailed derivation is provided in the supplement.

Optimizing the objective. To maximize the observed-data log-likelihood with respect to Θ, we partition the parameters into two subsets. The first subset, Θ₁ = {Σ_b, a, ℓ, σ²}, contains values that parameterize the covariance function K(t_1, t_2) above.
As is often done when designing the kernel of a Gaussian process, we use a combination of domain knowledge to choose candidate values and model selection, with the observed-data log-likelihood as the criterion for choosing among candidates [20]. The second subset, Θ₂ = {Λ, ⃗w_{1:G}, ⃗β_{1:G}}, contains values that parameterize the mean of the multivariate normal distribution in Equation 2. We learn these parameters using expectation maximization (EM) to find a local maximum of the observed-data log-likelihood.

Expectation step. All parameters related to ⃗b_i and f_i are limited to the covariance kernel and are not optimized using EM. We therefore only need to consider the subtype indicators z_i as unobserved in the expectation step. Because z_i is discrete, its posterior is computed by normalizing the joint probability of z_i and ⃗y_i. Let π*_ig denote the posterior probability that individual i has subtype g ∈ {1, ..., G}; then we have

$$\pi^*_{ig} \propto \pi_g(\vec{x}_{iz})\, \mathcal{N}\big(\vec{y}_i \mid \Phi_p(\vec{t}_i)\,\Lambda\,\vec{x}_{ip} + \Phi_z(\vec{t}_i)\,\vec{\beta}_g,\; K(\vec{t}_i, \vec{t}_i)\big). \qquad (3)$$

Maximization step. In the maximization step, we optimize the marginal probability of the soft assignments under the multinomial logistic regression model with respect to ⃗w_{1:G} using gradient-based methods. To optimize the expected complete-data log-likelihood with respect to Λ and ⃗β_{1:G}, we note that the mean of the multivariate normal for each individual is a linear function of these parameters. Holding Λ fixed, we can therefore solve for ⃗β_{1:G} in closed form, and vice versa. We use a block coordinate ascent approach, alternating between solving for Λ and ⃗β_{1:G} until convergence. Because the expected complete-data log-likelihood is concave with respect to all parameters in Θ₂, each maximization step is guaranteed to converge. We provide additional details in the supplement.
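The expectation step of Eq. 3 amounts to evaluating G Gaussian densities and normalizing. A minimal scipy sketch with toy subtype means and covariance (illustrative names, not the paper's code); working in log space keeps the normalization numerically stable:

```python
import numpy as np
from scipy.stats import multivariate_normal

def e_step(y, means, K, prior):
    """Posterior subtype responsibilities (Eq. 3):
    pi*_ig proportional to pi_g(x_iz) * N(y_i | mu_g, K), normalized over g."""
    log_post = np.array([
        np.log(prior[g]) + multivariate_normal.logpdf(y, means[g], K)
        for g in range(len(prior))])
    log_post -= log_post.max()           # stabilize before exponentiating
    post = np.exp(log_post)
    return post / post.sum()

# Toy: two subtypes with opposite mean trajectories over 3 time points.
K = np.eye(3)
y = np.array([1.0, 1.1, 0.9])
means = [np.ones(3), -np.ones(3)]
prior = np.array([0.5, 0.5])
resp = e_step(y, means, K, prior)        # heavily favors subtype 0
```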
2.2 Prediction

Our prediction ŷ(t′_i) for the value of the trajectory at time t′_i is the expectation of the marker y′_i under the posterior predictive distribution conditioned on the markers ⃗y_i observed at times ⃗t_i thus far. This requires evaluating the following expression:

$$\hat{y}(t'_i) = \sum_{z_i=1}^{G} \int_{\mathbb{R}^{d_\ell}} \int_{\mathbb{R}^{N_i}} \underbrace{\mathbb{E}\big[y'_i \mid z_i, \vec{b}_i, f_i, t'_i\big]}_{\text{prediction given latent vars.}}\; \underbrace{P\big(z_i, \vec{b}_i, f_i \mid \vec{y}_i, X_i, \Theta\big)}_{\text{posterior over latent vars.}}\, df_i\, d\vec{b}_i \qquad (4)$$

$$= \mathbb{E}^*_{z_i, \vec{b}_i, f_i}\Big[\Phi_p(t'_i)^\top \Lambda \vec{x}_{ip} + \Phi_z(t'_i)^\top \vec{\beta}_{z_i} + \Phi_\ell(t'_i)^\top \vec{b}_i + f_i(t'_i)\Big] \qquad (5)$$

$$= \underbrace{\Phi_p(t'_i)^\top \Lambda \vec{x}_{ip}}_{\text{population prediction}} + \underbrace{\Phi_z(t'_i)^\top \overbrace{\mathbb{E}^*_{z_i}\big[\vec{\beta}_{z_i}\big]}^{\vec{\beta}^*_i\ (\text{Eq. }7)}}_{\text{subpopulation prediction}} + \underbrace{\Phi_\ell(t'_i)^\top \overbrace{\mathbb{E}^*_{\vec{b}_i}\big[\vec{b}_i\big]}^{\vec{b}^*_i\ (\text{Eq. }10)}}_{\text{individual prediction}} + \underbrace{\overbrace{\mathbb{E}^*_{f_i}\big[f_i(t'_i)\big]}^{f^*_i(t'_i)\ (\text{Eq. }12)}}_{\text{structured noise prediction}}, \qquad (6)$$

where E* denotes an expectation conditioned on ⃗y_i, X_i, Θ. In moving from Eq. 4 to 5, we have written the integral as an expectation and substituted the inner expectation with the mean of the normal distribution in Eq. 1. From Eq. 5 to 6, we use linearity of expectation. Eqs.
7, 10, and 12 below show how the expectations in Eq. 6 are computed. An expanded version of these steps is provided in the supplement.

Figure 2: Plots (a) and (c) show dynamic predictions using the proposed model for two individuals. Red markers are unobserved. Blue shows the trajectory predicted using the most likely subtype, and green shows the second most likely. Plot (b) shows dynamic predictions using the B-spline GP baseline. Plot (d) shows predictions made using the proposed model without individual-specific adjustments.

Computing the population prediction is straightforward as all quantities are observed. To compute the subpopulation prediction, we need the marginal posterior over z_i, which we used in the expectation step above (Eq. 3). The expected subtype coefficients are therefore

$$\vec{\beta}^*_i \triangleq \sum_{z_i=1}^{G} \pi^*_{i z_i}\, \vec{\beta}_{z_i}. \qquad (7)$$

To compute the individual prediction, note that by conditioning on z_i, the integral over the likelihood with respect to f_i and the prior over ⃗b_i form the likelihood and prior of a Bayesian linear regression.
Let K_f = K_OU(⃗t_i, ⃗t_i) + σ²I; then the posterior over ⃗b_i conditioned on z_i is:

$$P\big(\vec{b}_i \mid z_i, \vec{y}_i, X_i, \Theta\big) \propto \mathcal{N}\big(\vec{b}_i \mid \vec{0}, \Sigma_b\big)\; \mathcal{N}\big(\vec{y}_i \mid \Phi_p(\vec{t}_i)\,\Lambda\,\vec{x}_{ip} + \Phi_z(\vec{t}_i)\,\vec{\beta}_{z_i} + \Phi_\ell(\vec{t}_i)\,\vec{b}_i,\; K_f\big). \qquad (8)$$

Just as in Eq. 2, we have integrated over f_i, moving its effect from the mean of the normal distribution to the covariance. Because the prior over ⃗b_i is conjugate to the likelihood on the right side of Eq. 8, the posterior can be written in closed form as a normal distribution (see e.g. [10]). The mean of the left side of Eq. 8 is therefore

$$\Big[\Sigma_b^{-1} + \Phi_\ell(\vec{t}_i)^\top K_f^{-1} \Phi_\ell(\vec{t}_i)\Big]^{-1} \Phi_\ell(\vec{t}_i)^\top K_f^{-1} \Big[\vec{y}_i - \Phi_p(\vec{t}_i)\,\Lambda\,\vec{x}_{ip} - \Phi_z(\vec{t}_i)\,\vec{\beta}_{z_i}\Big]. \qquad (9)$$

To compute the unconditional posterior mean of ⃗b_i, we take the expectation of Eq. 9 with respect to the posterior over z_i. Eq. 9 is linear in ⃗β_{z_i}, so we can directly replace ⃗β_{z_i} with its mean (Eq. 7):

$$\vec{b}^*_i \triangleq \Big[\Sigma_b^{-1} + \Phi_\ell(\vec{t}_i)^\top K_f^{-1} \Phi_\ell(\vec{t}_i)\Big]^{-1} \Phi_\ell(\vec{t}_i)^\top K_f^{-1} \Big[\vec{y}_i - \Phi_p(\vec{t}_i)\,\Lambda\,\vec{x}_{ip} - \Phi_z(\vec{t}_i)\,\vec{\beta}^*_i\Big]. \qquad (10)$$

Finally, to compute the structured noise prediction, note that conditioned on z_i and ⃗b_i, the GP prior and marker likelihood (Eq. 1) form a standard GP regression (see e.g. [20]). The conditional posterior of f_i(t′_i) is therefore a GP with mean

$$K_{\mathrm{OU}}(t'_i, \vec{t}_i)\,\big[K_{\mathrm{OU}}(\vec{t}_i, \vec{t}_i) + \sigma^2 I\big]^{-1} \big(\vec{y}_i - \Phi_p(\vec{t}_i)\,\Lambda\,\vec{x}_{ip} - \Phi_z(\vec{t}_i)\,\vec{\beta}_{z_i} - \Phi_\ell(\vec{t}_i)\,\vec{b}_i\big). \qquad (11)$$

To compute the unconditional posterior expectation of f_i(t′_i), we note that the expression above is linear in z_i and ⃗b_i, and so their expectations can be plugged in to obtain

$$f^*_i(t'_i) \triangleq K_{\mathrm{OU}}(t'_i, \vec{t}_i)\,\big[K_{\mathrm{OU}}(\vec{t}_i, \vec{t}_i) + \sigma^2 I\big]^{-1} \big(\vec{y}_i - \Phi_p(\vec{t}_i)\,\Lambda\,\vec{x}_{ip} - \Phi_z(\vec{t}_i)\,\vec{\beta}^*_i - \Phi_\ell(\vec{t}_i)\,\vec{b}^*_i\big). \qquad (12)$$

3 Experiments

We demonstrate our approach by building a tool to predict the lung disease trajectories of individuals with scleroderma. Lung disease is currently the leading cause of death among scleroderma patients, and is notoriously difficult to treat because there are few predictors of decline and there is tremendous variability across individual trajectories [21].
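Eq. 10 above is a standard conjugate Bayesian linear-regression update. The numpy sketch below computes the posterior mean of the individual coefficients from the residual left after subtracting the population and subpopulation predictions; inputs and names are toy illustrations, not the paper's code.

```python
import numpy as np

def posterior_b_mean(Phi_l, K_f, residual, Sigma_b):
    """Posterior mean of b_i (Eq. 10):
    [Sigma_b^{-1} + Phi_l^T K_f^{-1} Phi_l]^{-1} Phi_l^T K_f^{-1} r,
    where r = y_i minus the population and subpopulation predictions."""
    Kf_inv = np.linalg.inv(K_f)
    A = np.linalg.inv(Sigma_b) + Phi_l.T @ Kf_inv @ Phi_l
    return np.linalg.solve(A, Phi_l.T @ Kf_inv @ residual)

# Toy: intercept-only individual model, i.i.d. noise, nearly flat prior,
# so the posterior mean approaches the average residual.
Phi_l = np.ones((4, 1))                  # d_l = 1 (intercept only)
K_f = np.eye(4)                          # K_OU + sigma^2 I collapsed to I
Sigma_b = np.array([[1e6]])              # weak prior -> little shrinkage
residual = np.array([2.0, 2.0, 2.0, 2.0])
b_star = posterior_b_mean(Phi_l, K_f, residual, Sigma_b)
```

With a strong prior (small Σ_b) the same update shrinks ⃗b*_i towards zero instead, which is how the model avoids over-adjusting when the clinical history is short.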
Clinicians track lung severity using the percent of predicted forced vital capacity (PFVC), which is expected to drop as the disease progresses. In addition, demographic variables and molecular test results are often available at baseline to aid prognoses. We train and validate our model using data from the Johns Hopkins Scleroderma Center patient registry, which is one of the largest in the world. To select individuals from the registry, we used the following criteria. First, we include individuals who were seen at the clinic within two years of their earliest scleroderma-related symptom. Second, we exclude all individuals with fewer than two PFVC measurements after their first visit. Finally, we exclude individuals who received a lung transplant. The dataset contains 672 individuals and a total of 4,992 PFVC measurements. For the population model, we use constant functions (i.e. observed covariates adjust an individual's intercept). The population covariates (⃗x_ip) are gender, African American race, and indicators of ACA and Scl-70 antibodies—two proteins believed to be connected to scleroderma-related lung disease. Note that all features are binary. For the subpopulation B-splines, we set boundary knots at 0 and 25 years (the maximum observation time in our data set is 23 years), use two interior knots that divide the period from 0-25 years into three equally spaced chunks, and use quadratics as the piecewise components. These B-spline hyper-parameters (knots and polynomial degree) are also used for all baseline models. We select G = 9 subtypes using BIC. The covariates in the subtype marginal model (⃗x_iz) are the same as those used in the population model. For the individual model, we use linear functions. For the hyper-parameters Θ₁ = {Σ_b, a, ℓ, σ²}, we set Σ_b to be a diagonal covariance matrix with entries [16, 10⁻²] along the diagonal, which correspond to the intercept and slope variances respectively.
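The subtype basis Φ_z described above (boundary knots at 0 and 25 years, two equally spaced interior knots, quadratic pieces) can be sketched with scipy; the exact construction used in the paper may differ, so treat this as an assumption-laden illustration:

```python
import numpy as np
from scipy.interpolate import BSpline

k = 2                                        # quadratic pieces
interior = [25 / 3, 50 / 3]                  # two equally spaced interior knots
knots = np.r_[[0.0] * (k + 1), interior, [25.0] * (k + 1)]

def phi_z(times):
    """B-spline basis expansion Phi_z(t): one column per basis function.
    Rows sum to 1 (partition of unity) for t inside [0, 25)."""
    times = np.asarray(times, float)
    n_basis = len(knots) - k - 1             # d_z = 5 for this knot layout
    B = np.empty((len(times), n_basis))
    for j in range(n_basis):
        coeffs = np.zeros(n_basis)
        coeffs[j] = 1.0
        B[:, j] = BSpline(knots, coeffs, k, extrapolate=False)(times)
    return B

Phi = phi_z([0.0, 5.0, 12.5, 24.0])          # 4 times x 5 basis functions
```

A subtype trajectory is then just Phi @ beta_g for its coefficient vector ⃗β_g ∈ R^5.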
Finally, we set a = 6, ℓ = 2, and σ² = 1 using domain knowledge; we expect transient deviations to last around 2 years and to change PFVC by around ±6 units.

Baselines. First, to compare against typical approaches used in clinical medicine that condition on baseline covariates only (e.g. [22]), we fit a regression model conditioned on all covariates included in ⃗x_iz above. The mean is parameterized using B-spline bases Φ(t) as:

$$\hat{y} \mid \vec{x}_{iz} = \Phi(t)^\top \Big(\vec{\beta}_0 + \textstyle\sum_{x_i \in \vec{x}_{iz}} x_i\, \vec{\beta}_i + \sum_{x_i, x_j \in \text{pairs of } \vec{x}_{iz}} x_i x_j\, \vec{\beta}_{ij}\Big). \qquad (13)$$

The second baseline is similar to [8] and [23] and extends the first baseline by accounting for individual-specific heterogeneity. The model has a mean function identical to the first baseline and individualizes predictions using a GP with the same kernel as in Equation 2 (using the hyper-parameters above). Another natural approach is to explain heterogeneity by using a mixture model similar to [9]. However, a mixture model cannot adequately explain away individual-specific sources of variability that are unrelated to subtype, and therefore fails to recover subtypes that capture canonical trajectories (we discuss this in detail in the supplement). The recovered subtypes from the full model do not suffer from this issue. To make the comparison fair, and to understand the extent to which the individual-specific component contributes towards personalizing predictions, we create a mixture model (Proposed w/ no personalization) where the subtypes are fixed to be the same as those in the full model and the remaining parameters are learned. Note that this version does not contain the individual-specific component.

Evaluation. We make predictions after one, two, and four years of follow-up. Errors are summarized within four disjoint time periods: (1, 2], (2, 4], (4, 8], and (8, 25] years². To measure error, we use the absolute difference between the prediction and a smoothed version of the individual's observed trajectory.
We estimate mean absolute error (MAE) using 10-fold CV at the level of individuals (i.e. all of an individual's data is held out together), and test for statistically significant reductions in error using a one-sided, paired t-test. For all models, we use the MAP estimate of the individual's trajectory. In the models that include subtypes, this means that we choose the trajectory predicted by the most likely subtype under the posterior. Although this discards information from the posterior, in our experience clinicians find this choice to be more interpretable.

²After the eighth year, data becomes too sparse to further divide this time span.

Predictions using 1 year of data
Model                            (1, 2]  % Im.   (2, 4]  % Im.   (4, 8]  % Im.   (8, 25]  % Im.
B-spline with Baseline Feats.     12.78           12.73           12.40           12.14
B-spline + GP                      5.49            7.70            9.67           10.71
Proposed                           5.26           *7.04    8.6    10.17           12.12
Proposed w/ no personalization     6.17            7.12            9.38           12.85

Predictions using 2 years of data
B-spline with Baseline Feats.                     12.73           12.40           12.14
B-spline + GP                                      5.88            8.65           10.02
Proposed                                          *5.48    6.8    *7.95    8.1     9.53
Proposed w/ no personalization                     6.00            8.12           11.39

Predictions using 4 years of data
B-spline with Baseline Feats.                                     12.40           12.14
B-spline + GP                                                      6.00            8.88
Proposed                                                          *5.14   14.3    *7.58   14.3
Proposed w/ no personalization                                     5.75            9.16

Table 1: MAE of PFVC predictions for the two baselines and the proposed model. Bold numbers indicate best performance across models (* is stat. significant). "% Im." reports percent improvement over the next best.

Qualitative results. In Figure 2 we present dynamically updated predictions for two patients (one per row; dynamic updates move left to right). Blue lines indicate the prediction under the most likely subtype and green lines indicate the prediction under the second most likely. The first individual (Figure 2a) is a 50-year-old, white woman with Scl-70 antibodies, which are thought to be associated with active lung disease.
Within the first year, her disease seems stable, and the model predicts this course with 57% confidence. After another year of data, the model shifts 21% of its belief to a rapidly declining trajectory, likely in part due to the sudden dip in year 2. We contrast this with the behavior of the B-spline GP shown in Figure 2b, which has limited capacity to express individualized long-term behavior. We see that the model does not adequately adjust in light of the downward trend between years one and two. To illustrate the value of including individual-specific adjustments, we now turn to Figures 2c and 2d (which plot predictions made by the proposed model with and without personalization respectively). This individual is a 60-year-old, white man who is Scl-70 negative, which makes declining lung function less likely. Both models use the same set of subtypes, but whereas the model without individual-specific adjustment does not consider the recovering subtype to be likely until after year two, the full model shifts the recovering subtype trajectory downward towards the man's initial PFVC value and identifies the correct trajectory using a single year of data.

Quantitative results. Table 1 reports MAE for the baselines and the proposed model. We note that after observing two or more years of data, our model's errors are smaller than those of the two baselines (statistically significantly so in all but one comparison). Although the B-spline GP improves over the first baseline, these results suggest that both the subpopulation and individual-specific components enable more accurate predictions of an individual's future course as more data are observed. Moreover, by comparing the proposed model with and without personalization, we see that subtypes alone are not sufficient and that individual-specific adjustments are critical. These improvements also have clinical significance.
For example, individuals who drop by more than 10 PFVC are candidates for aggressive immunosuppressive therapy. Out of the 7.5% of individuals in our data who decline by more than 10 PFVC, our model predicts such a decline at twice the true-positive rate of the B-spline GP (31% vs. 17%) and with a lower false-positive rate (81% vs. 90%).

4 Conclusion

We have described a hierarchical model for making individualized predictions of disease activity trajectories that accounts for both latent and observed sources of heterogeneity. We empirically demonstrated that using all elements of the proposed hierarchy allows our model to dynamically personalize predictions and reduce error as more data about an individual is collected. Although our analysis focused on scleroderma, our approach is more broadly applicable to other complex, heterogeneous diseases [1]. Examples of such diseases include asthma [3], autism [4], and COPD [5]. There are several promising directions for further developing the ideas presented here. First, we observed that predictions are less accurate early in the disease course when little data is available to learn the individual-specific adjustments. To address this shortcoming, it may be possible to leverage time-dependent covariates in addition to the baseline covariates used here. Second, the quality of our predictions depends upon the allowed types of individual-specific adjustments encoded in the model. More sophisticated models of individual variation may further improve performance. Moreover, approaches for automatically learning the class of possible adjustments would make it possible to apply our approach to new diseases more quickly.

References

[1] J. Craig. Complex diseases: Research and applications. Nature Education, 1(1):184, 2008.
[2] J. Varga, C.P. Denton, and F.M. Wigley. Scleroderma: From Pathogenesis to Comprehensive Management. Springer Science & Business Media, 2012.
[3] J. Lötvall et al.
Asthma endotypes: a new approach to classification of disease entities within the asthma syndrome. Journal of Allergy and Clinical Immunology, 127(2):355–360, 2011.
[4] L.D. Wiggins, D.L. Robins, L.B. Adamson, R. Bakeman, and C.C. Henrich. Support for a dimensional view of autism spectrum disorders in toddlers. Journal of Autism and Developmental Disorders, 42(2):191–200, 2012.
[5] P.J. Castaldi et al. Cluster analysis in the COPDGene study identifies subtypes of smokers with distinct patterns of airway disease and emphysema. Thorax, 2014.
[6] S. Saria and A. Goldenberg. Subtyping: What is it and its role in precision medicine. IEEE Intelligent Systems, 30, 2015.
[7] D.S. Lee, P.C. Austin, J.L. Rouleau, P.P. Liu, D. Naimark, and J.V. Tu. Predicting mortality among patients hospitalized for heart failure: derivation and validation of a clinical model. JAMA, 290(19):2581–2587, 2003.
[8] D. Rizopoulos. Dynamic predictions and prospective accuracy in joint models for longitudinal and time-to-event data. Biometrics, 67(3):819–829, 2011.
[9] C. Proust-Lima et al. Joint latent class models for longitudinal and time-to-event data: A review. Statistical Methods in Medical Research, 23(1):74–90, 2014.
[10] K.P. Murphy. Machine Learning: A Probabilistic Perspective. MIT Press, 2012.
[11] S. Roberts, M. Osborne, M. Ebden, S. Reece, N. Gibson, and S. Aigrain. Gaussian processes for time-series modelling. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 371(1984):20110550, 2013.
[12] J.Q. Shi, R. Murray-Smith, and D.M. Titterington. Hierarchical Gaussian process mixtures for regression. Statistics and Computing, 15(1):31–41, 2005.
[13] A. Gelman and J. Hill. Data Analysis Using Regression and Multilevel/Hierarchical Models. Cambridge University Press, 2006.
[14] H. Wang et al. High-order multi-task feature learning to identify longitudinal phenotypic markers for Alzheimer's disease progression prediction.
In Advances in Neural Information Processing Systems, pages 1277–1285, 2012.
[15] J. Ross and J. Dy. Nonparametric mixture of Gaussian processes with constraints. In Proceedings of the 30th International Conference on Machine Learning (ICML-13), pages 1346–1354, 2013.
[16] P.F. Schulam, F.M. Wigley, and S. Saria. Clustering longitudinal clinical marker trajectories from electronic health data: Applications to phenotyping and endotype discovery. In Proceedings of the Twenty-Ninth AAAI Conference on Artificial Intelligence, 2015.
[17] B.M. Marlin. Modeling user rating profiles for collaborative filtering. In Advances in Neural Information Processing Systems, 2003.
[18] G. Adomavicius and A. Tuzhilin. Toward the next generation of recommender systems: A survey of the state-of-the-art and possible extensions. IEEE Transactions on Knowledge and Data Engineering, 17(6):734–749, 2005.
[19] D. Sontag, K. Collins-Thompson, P.N. Bennett, R.W. White, S. Dumais, and B. Billerbeck. Probabilistic models for personalizing web search. In Proceedings of the Fifth ACM International Conference on Web Search and Data Mining, pages 433–442. ACM, 2012.
[20] C.E. Rasmussen and C.K. Williams. Gaussian Processes for Machine Learning. The MIT Press, 2006.
[21] Y. Allanore et al. Systemic sclerosis. Nature Reviews Disease Primers, 2015.
[22] D. Khanna et al. Clinical course of lung physiology in patients with scleroderma and interstitial lung disease: analysis of the Scleroderma Lung Study placebo group. Arthritis & Rheumatism, 63(10):3078–3085, 2011.
[23] J.Q. Shi, B. Wang, E.J. Will, and R.M. West. Mixed-effects Gaussian process functional regression models with application to dose–response curve prediction. Statistics in Medicine, 31(26):3165–3177, 2012.
Local Smoothness in Variance Reduced Optimization

Daniel Vainsencher, Han Liu (Dept. of Operations Research & Financial Engineering, Princeton University, Princeton, NJ 08544; {daniel.vainsencher,han.liu}@princeton.edu)
Tong Zhang (Dept. of Statistics, Rutgers University, Piscataway, NJ 08854; tzhang@stat.rutgers.edu)

Abstract

We propose a family of non-uniform sampling strategies to provably speed up a class of stochastic optimization algorithms with linear convergence, including Stochastic Variance Reduced Gradient (SVRG) and Stochastic Dual Coordinate Ascent (SDCA). For a large family of penalized empirical risk minimization problems, our methods exploit data-dependent local smoothness of the loss functions near the optimum, while maintaining convergence guarantees. Our bounds are the first to quantify the advantage gained from local smoothness, which is significant for some problems. Empirically, we provide thorough numerical results to back up our theory. Additionally, we present algorithms exploiting local smoothness in more aggressive ways, which perform even better in practice.

1 Introduction

We consider minimization of functions of the form

$$P(w) = n^{-1} \sum_{i=1}^{n} \phi_i\big(x_i^\top w\big) + R(w),$$

where the convex φ_i corresponds to the loss of w on some data x_i, R is a convex regularizer, and P is μ strongly convex, so that

$$P(w') \ge P(w) + \langle w' - w, \nabla P(w)\rangle + \frac{\mu}{2}\,\|w' - w\|^2.$$

In addition, we assume each φ_i is smooth in general and near flat in some region; examples include SVM, regression with the absolute error or ε-insensitive loss, smooth approximations of those, and also logistic regression. Stochastic optimization algorithms consider one loss φ_i at a time, chosen at random according to a distribution p_t which may change over time.
Recent algorithms combine φ_i with information about previously seen losses to accelerate the process, achieving a linear convergence rate; these include Stochastic Variance Reduced Gradient (SVRG) [2], Stochastic Averaged Gradient (SAG) [4], and Stochastic Dual Coordinate Ascent (SDCA) [6]. The expected number of iterations required by these algorithms is of the form O((n + L/μ) log ε⁻¹), where L is a Lipschitz constant of all loss gradients ∇φ_i, measuring their smoothness. Difficult problems, having a condition number L/μ much larger than n, are called ill conditioned, and have motivated the development of accelerated algorithms [5, 8, 3]. Some of these algorithms have been adapted to allow importance sampling, where p_t is non-uniform; the effect on convergence bounds is to replace the uniform bound L described above by L_avg, the average over the loss-specific Lipschitz bounds L_i. In practice, for an important class of problems, a large proportion of the φ_i need to be sampled only very few times, while others must be sampled indefinitely. As an example we take an instance of smooth SVM, with μ = n⁻¹ and L ≈ 30, solved via standard SDCA. In Figure 1 we observe the decay of an upper bound on the updates possible for different samples, where choosing a sample that is white produces no update. The large majority of the figure is white, indicating wasted effort. For 95% of losses, the algorithm captured all relevant information after just 3 visits. Since the non-white zone is nearly constant over time, detecting and focusing on the few important losses should be possible. This represents both a success of SDCA and significant room for improvement, as focusing just half the effort on the active losses would increase effectiveness by a factor of 10. Similar phenomena occur under the SVRG and SAG algorithms as well. But is the phenomenon specific to a single problem, or general? For what problems can we expect the set of useful losses to be small and near constant?

Figure 1: SDCA on smoothed SVM.
Dual residuals upper bound the SDCA update size; white indicates zero hence wasted effort. The dual residuals quickly become sparse; the support is stable. Allowing pt to change over time, the phenomenon described indeed can be exploited; Figure 2 shows significant speedups obtained by our variants of SVRG and SDCA. Comparisons on other datasets are given in Section 4. The mechanism by which speed up is obtained is specific to each algorithm, but the underlying phenomenon we exploit is the same: many problems are much smoother locally than globally. First consider a single smoothed hinge loss φi, as used in smoothed SVM with smoothing parameter γ. The non-smoothness of the hinge loss is spread in φi over an interval of length γ, as illustrated in Figure 3 and given by φi (a) = 0 a > 1 1 −a −γ/2 a < 1 −γ (a −1)2 / (2γ) otherwise . The Lipschitz constant of d daφi (a) is γ−1, hence it enters into the global estimate of condition number Lavg as Li = ∥xi∥/γ; hence approximating the hinge loss more precisely, with a smaller γ, makes the problems strictly more ill conditioned. But outside that interval of length γ, φi can be locally approximated as affine, having a constant gradient; into a correct expression of local conditioning, say on interval B in the figure, it should contribute nothing. So smaller γ can sometimes make the problem (locally) better conditioned. A set I of losses having constant gradients over a subset of the hypothesis space can be summarized for purposes of optimization by a single affine 0 500 1000 1500 2000 2500 3000 3500 4000 Effective passes over data 10-13 10-12 10-11 10-10 10-9 10-8 10-7 10-6 10-5 10-4 10-3 10-2 10-1 100 Duality gap/suboptimality SVRG solving smoothed hinge loss SVM on MNIST 0/1. Loss gradient is 3.33e+01 Lip. smooth. 6.77e-05 strong convexity. Uniform sampling ([2]) Global smoothness sampling ([7]) Local SVRG (Alg. 1) Empirical Affinity SVRG (Alg. 
[Figure 2, right panel: duality gap/suboptimality vs. effective passes over data for SDCA variants solving smoothed hinge loss SVM on MNIST 0/1: uniform sampling [6], global smoothness sampling [10], Affine-SDCA (Alg. 2), Empirical ∆ SDCA (Alg. 3).] Figure 2: On the left we see variants of SVRG with η = 1/(8L), on the right variants of SDCA. Figure 3: A loss φ_i that is near flat (Hessian vanishes, near-constant gradient) on a “ball” B ⊂ ℝ. B, with radius 2r‖x_i‖, is induced by the (Euclidean) ball of hypotheses B(w_t, r), which we prove includes w*. Then the loss φ_i does not contribute to curvature in the region of interest, and an affine model of the sum of such φ_i on B can replace sampling from them. We find r in the algorithms by combining strong convexity with quantities such as the duality gap or gradient norm. function, so sampling from I should not be necessary. It so happens that SAG, SVRG and SDCA naturally do such modeling, hence need only light modifications to realize significant gains. We provide the details for SVRG in Section 2 (the SAG case is similar) and for SDCA in Section 3. Other losses, while nowhere affine, are locally smooth: the logistic regression loss has gradients with local Lipschitz constants that decay exponentially with distance from a hyperplane dependent on x_i. For such losses we cannot forgo sampling any φ_i permanently, but we can still obtain bounds benefitting from local smoothness for an SVRG variant. Next we define formally the relevant geometric properties of the optimization problem and relate them to provable convergence improvements over existing generic bounds; we give detailed bounds in the sequel. Throughout, B(c, r) is a Euclidean ball of radius r around c. Definition 1. We shall denote L_{i,r} = max_{w∈B(w*,r)}
‖∇²φ_i(x_i^⊤ w)‖
which is also the uniform Lipschitz coefficient of ∇φ_i that holds at distance at most r from w*. Remark 2. Algorithms will use similar quantities that do not depend on knowing w*, such as L̃_{i,r} around a known w̃. Definition 3. We define the average ball smoothness function S : ℝ → ℝ of a problem by:

S(r) = Σ_{i=1}^n L_{i,∞} / Σ_{i=1}^n L_{i,r}.

In Theorem 5 we see that Algorithm 1 requires fewer stochastic gradient samples to reduce the loss suboptimality by a constant factor than SVRG with importance sampling according to global smoothness. Once it has certified that the optimum w* is within r of the current iterate w_0, it uses S(2r) times fewer stochastic gradient steps. The next measure similarly increases when many losses are affine on a ball around the optimum. Definition 4. We define the ball affinity function A : ℝ → [0, n] of a problem by:

A(r) = (n⁻¹ Σ_{i=1}^n 1{L_{i,r} > 0})⁻¹.

In Theorem 10 we see similarly that Algorithm 2 requires fewer accesses of φ_i to reduce the duality gap to any ε > 0 than SDCA with importance sampling according to global smoothness. Once it has certified that the optimum is within distance r of the current primal iterate w = w(α_0), it accesses A(2r) times fewer φ_i. In both cases, local smoothness and affinity enable us to focus a constant portion of the sampling effort on the few losses still challenging near the optimum; when these are few, the ratios (and hence the algorithmic advantage) are large. We obtain these provable speedups over already fast algorithms by using local smoothness which we can certify. For non-smooth losses such as SVM and absolute loss regression, we can similarly ignore irrelevant losses, leading to significant practical improvements; the current theory for such losses is insufficient to quantify the speedups as we do for smooth losses.
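For concreteness, the ball quantities above can be computed in closed form for the smoothed hinge loss, whose second derivative is γ⁻¹ on the interval (1 − γ, 1) and zero elsewhere. The following is a minimal sketch (function names ours), following the paper's convention L_i = ‖x_i‖/γ for the global bound:

```python
import numpy as np

def local_lipschitz(x_i, w_star, r, gamma=0.1):
    """L_{i,r} for the smoothed hinge: zero when a = <x_i, w> stays
    outside the curved interval (1 - gamma, 1) for every w in
    B(w_star, r); otherwise the global bound ||x_i|| / gamma."""
    nrm = np.linalg.norm(x_i)
    a0 = float(x_i @ w_star)
    lo, hi = a0 - r * nrm, a0 + r * nrm   # range of a over the ball
    if hi < 1.0 - gamma or lo > 1.0:
        return 0.0
    return nrm / gamma

def ball_smoothness(X, w_star, r, gamma=0.1):
    """S(r) = sum_i L_{i,inf} / sum_i L_{i,r} (Definition 3)."""
    glob = sum(np.linalg.norm(x) / gamma for x in X)
    loc = sum(local_lipschitz(x, w_star, r, gamma) for x in X)
    return glob / loc

def ball_affinity(X, w_star, r, gamma=0.1):
    """A(r) = (mean_i 1{L_{i,r} > 0})^{-1} (Definition 4)."""
    active = sum(local_lipschitz(x, w_star, r, gamma) > 0 for x in X)
    return len(X) / active
```

With two losses of which only one still curves near w*, both S(r) and A(r) exceed one, quantifying the attainable speedup.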
We obtain algorithms that are simpler and sometimes much faster by using the more qualitative observation that as the iterates tend to an optimum, the set of relevant losses is generally stable and shrinking. Then algorithms can estimate the set of relevant losses directly from quantities observed in performing stochastic iterations, sidestepping the looseness of estimating r. There are two previous works in this general direction. The first work combining non-uniform sampling and empirical estimation of loss smoothness is [4]. They note excellent empirical performance on a variant of SAG, but without theory ensuring convergence. We provide similarly fast (and bound-free) variants of SDCA (Section 3.2) and SVRG (Section 2.2). A dynamic importance sampling variant of SDCA was reported in [1] without relation to local smoothness; we discuss the connection in Section 3. 2 Local smoothness and gradient descent algorithms In this section we describe how SVRG, in contrast to classical stochastic gradient descent (SGD), naturally exposes local smoothness in the losses. Then we present two variants of SVRG that realize these gains. We begin by considering a single loss when close to the optimum, and for simplicity assume R ≡ 0. Assume a small ball B = B(w, r) around our current estimate w includes the optimum w*, and B is contained in a flat region of φ_i, and this holds for a large proportion of the n losses. SGD and its descendant SVRG (with importance sampling) use updates of the form w_{t+1} = w_t − η v_i^t, where E_{i∼p} v_i^t = ∇F(w_t) is an unbiased estimator of the full gradient of the loss term F(w) = n⁻¹ Σ_{i=1}^n φ_i(x_i^⊤ w). SVRG uses v_i^t = (∇φ_i(x_i^⊤ w_t) − ∇φ_i(x_i^⊤ w̃))/(p_i n) + ∇F(w̃), where w̃ is some reference point, with the advantage that v_i^t has variance that vanishes as w_t, w̃ → w*. We point out in addition that when w̃, w_t ∈ B and ∇φ_i(x_i^⊤ ·) is constant on B, the effects of sampling φ_i cancel out and v_i^t = ∇F(w̃).
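The cancellation just described can be checked directly in code; a minimal sketch of the variance-reduced direction for generalized linear losses φ_i(x_i^⊤ w) (names ours, not the paper's implementation):

```python
import numpy as np

def svrg_direction(i, w_t, w_ref, X, loss_grad, full_grad_ref, p):
    """Variance-reduced direction v_i^t for sampled index i:
    (grad_i(w_t) - grad_i(w_ref)) / (n p_i) + full gradient at w_ref.
    When grad_i is constant between w_t and w_ref, the correction
    vanishes and v_i^t equals the reference full gradient."""
    n = len(X)
    g_t = loss_grad(X[i] @ w_t) * X[i]
    g_ref = loss_grad(X[i] @ w_ref) * X[i]
    return (g_t - g_ref) / (n * p[i]) + full_grad_ref
```

With a constant-derivative loss the returned direction is exactly `full_grad_ref`, regardless of which i is drawn; in general the direction is unbiased for ∇F(w_t) whenever `full_grad_ref = ∇F(w_ref)`.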
In particular, we can set p_i^t = 0 with no loss of information. More generally, when ∇φ_i(x_i^⊤ ·) is near constant on B (small L_{i,r}), the difference between the sampled values of ∇φ_i in v_i^t is very small and p_i^t can be similarly small. We formalize this in the next section, where we localize existing theory that applied importance sampling to adapt SVRG statically to losses with varied global smoothness. 2.1 The Local SVRG algorithm Halving the suboptimality of a solution using SVRG has two parts: computing an exact gradient at a reference point, and performing many stochastic gradient descent steps. The sampling distribution, step size and number of iterations in the latter are determined by the smoothness of the losses. Algorithm 1, Local-SVRG, replaces the global bounds on gradient change L_i with local ones L_{i,r}, made valid by restricting iterations to a small ball certified to contain the optimum. This allows us to leverage previous algorithms and analysis, maintaining previous guarantees and improving on them when S(r) is large. For this section we assume P = F; as in the initial version of SVRG [2], we may incorporate a smooth regularizer (though in a different way, explained later). This allows us to apply the existing Prox-SVRG algorithm [7] and its theory; instead of using the proximal operator for fixed regularization, we use it to localize (by projections) the stochastic descent to a ball B around the reference point w̃; see Algorithm 1. Then the theory developed around importance sampling and global smoothness applies to the sharper local smoothness estimates that hold on B (ignoring φ_i which are affine on B is a special case). This allows for fewer stochastic iterations and a larger stepsize, obtaining speedups that are problem dependent but often large in late stages; see Figure 2. This is formalized in the following theorem. Algorithm 1 Local SVRG is an application of Prox-SVRG with w̃-dependent regularization.
This portion reduces suboptimality by a constant factor; apply it iteratively to minimize the loss.
1. Compute ṽ = ∇F(w̃).
2. Define r = (2/µ)‖ṽ‖ and R(w) = i_{B(w̃,r)}(w), the indicator which is 0 for w ∈ B(w̃, r) and ∞ otherwise (by µ-strong convexity, w* ∈ B(w̃, r)).
3. For each i, compute L̃_{i,r} = max_{w∈B(w̃,r)} ‖∇²φ_i(x_i^⊤ w)‖.
4. Define a probability distribution p_i ∝ L̃_{i,r}, the weighted Lipschitz constant L̃_p = max_i L̃_{i,r}/(n p_i), and the step size η = 1/(16 L̃_p).
5. Apply the inner loop of Prox-SVRG:
(a) Set w_0 = w̃.
(b) For t ∈ {1, . . . , m}:
i. Choose i_t ∼ p.
ii. Compute v_t = (∇φ_{i_t}(x_{i_t}^⊤ w_{t−1}) − ∇φ_{i_t}(x_{i_t}^⊤ w̃))/(n p_{i_t}) + ṽ.
iii. w_t = prox_{ηR}(w_{t−1} − η v_t).
(c) Return ŵ = m⁻¹ Σ_{t∈[m]} w_t.
Theorem 5. Let w̃ be an initial solution such that ∇F(w̃) certifies that w* ∈ B = B(w̃, r). Algorithm 1 finds ŵ with E F(ŵ) − F(w*) ≤ (F(w̃) − F(w*))/2 using O(d(n + m)) time, where m = (128/µ) n⁻¹ Σ_{i=1}^n L_{i,2r} + 3.
Remark 6. In the difficult case that is ill-conditioned even locally, so that 128 n⁻¹ Σ_{i=1}^n L_{i,2r} ≫ nµ, the term n is negligible and the ratio between the complexities of Algorithm 1 and an SVRG using global smoothness approaches S(2r).
Proof. In the initial pass over the data, compute ∇F(w̃), r and L̃_{i,r} ≤ L_{i,2r}. We then apply a single round of the Prox-SVRG algorithm of [7], with the regularizer R(w) = i_{B(w̃,r)}(w) localizing around the reference point. Then we may apply Theorem 1 of [7] with the local L̃_{i,r} instead of the global L_i required there for general proximal operators. This allows us to use the corresponding larger stepsize η = 1/(16 L̃_p) = 1/(16 n⁻¹ Σ_{i=1}^n L̃_{i,r}).
Remark 7. The use of projections (hence the restriction to smooth regularization) is necessary because the local smoothness is restricted to B, and venturing outside B with a large step size may compromise convergence entirely. While excursions outside B are difficult to control in theory, in practice skipping the projection entirely does not seem to hurt convergence. Informally, stepping far from B requires moving consistently against ∇F, which is an unlikely event. Remark 8.
The theory requires m stochastic steps per exact gradient to guarantee any improvement at all, but for ill-conditioned problems this is often very pessimistic. In practice, the first O(n) stochastic steps after an exact gradient provide most of the benefit. In this heuristic scenario, the computational benefit of Theorem 5 comes through the sampling distribution and the larger step size. Enlarging the step size without accompanying theory often yields a corresponding speedup up to a certain precision, but the risk of non-convergence materializes frequently. While [2] incorporated a smooth R by adding it to every loss function, this could reduce the smoothness (increase L̃_{i,r}) inherent in the losses, hence reducing the benefits of our approach. We instead propose to add a single loss function defined as nR; that this is not of the form φ_i(x_i^⊤ w) poses no real difficulty, because Local-SVRG depends on losses only through their gradients and smoothness. The main difficulty with the approach of this section is that in early stages r is large, in part because µ is often very small (µ = n^{−α} for α ∈ {0.5, 1} are common choices), leading to loose bounds on L̃_{i,r}. In some cases the speedup is only obtained when the precision is already satisfactory; we consider a less conservative scheme in the next section. 2.2 The Empirical Affinity SVRG algorithm Local-SVRG relies on local smoothness to certify that some ∆_i^t =
‖∇φ_i(x_i^⊤ w_t) − ∇φ_i(x_i^⊤ w̃)‖
are small. In contrast, Empirical Affinity SVRG (Algorithm 4) takes ∆_i^t > 0 to be evidence that a loss is active; when ∆_i^t = 0 several times, that is evidence of local affinity of the loss, hence it can be sampled less often. This strategy deemphasizes locally affine losses even when r is too large to certify it, thereby focusing work on the relevant losses much earlier. Half of the time we sample proportionally to the global bounds L_i, which keeps the estimates of ∆_i^t current, and also bounds the variance when some ∆_i^t increases from zero to positive. A benefit of using ∆_i^t is that it is observed at every sample of i without additional work. Pseudo-code for the slightly long Algorithm 4 is in the supplementary material for space reasons. 3 Stochastic Dual Coordinate Ascent (SDCA) The SDCA algorithm solves P through the dual problem

D(α) = −n⁻¹ Σ_{i=1}^n φ_i^*(−α_i) + R^*(w(α)), where w(α) = ∇R^*((λn)⁻¹ Σ_{i=1}^n x_i α_i).

At each iteration, SDCA chooses i at random according to p_t, and updates the α_i corresponding to the loss φ_i so as to increase D. This scheme has been used for particular losses before, and was analyzed in [6], obtaining linear rates for general smooth losses, uniform sampling and ℓ2 regularization, and recently generalized in [10] to other regularizers and general sampling distributions. In particular, [10] show improved bounds and performance by statically adapting to the global smoothness properties of the losses; using a distribution p_i ∝ 1 + L_i(nµ)⁻¹, it suffices to perform O((n + L_avg/µ) log((n + L_avg/µ) ε⁻¹)) iterations to obtain an expected duality gap of at most ε. While SDCA is very different from gradient descent methods, it shares the property that when the current state of the algorithm (in the form of α_i) already matches the derivative information for φ_i, the update does not require φ_i and can be skipped. As we have seen in Figure 1, for many losses α_i converges to α_i^* very quickly; we will show that local affinity is a sufficient condition.
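A minimal sketch of these two ingredients for the smoothed hinge loss (ℓ2 regularization; function names and the damping parameter s are ours): one SDCA coordinate step moving α_i toward its fixed point −φ_i'(⟨x_i, w⟩), and the kind of check by which local affinity pins down α_i at its final value:

```python
import numpy as np

def sdca_step(i, alpha, w, X, loss_grad, lam, s=0.5):
    """One SDCA coordinate update with R(w) = (lam/2)||w||^2: move
    alpha_i a fraction s toward its fixed point -phi_i'(<x_i, w>),
    maintaining w = (1/(lam*n)) sum_j alpha_j x_j incrementally."""
    n = len(X)
    delta = s * (-loss_grad(X[i] @ w) - alpha[i])
    alpha[i] += delta
    w += delta * X[i] / (lam * n)
    return alpha, w

def certify_affine(x_i, w_center, r, gamma=0.1):
    """For the smoothed hinge: if a = <x_i, w> stays on one affine
    branch for all w in B(w_center, r), the derivative is a constant
    g_i there and alpha_i* = -g_i; returns alpha_i* or None."""
    nrm = np.linalg.norm(x_i)
    a0 = float(x_i @ w_center)
    lo, hi = a0 - r * nrm, a0 + r * nrm
    if lo > 1.0:             # flat branch, derivative 0
        return 0.0
    if hi < 1.0 - gamma:     # linear branch, derivative -1
        return 1.0
    return None
```

Once the derivative is constant on the ball, `sdca_step` is a no-op at α_i = −g_i, which is why such losses can be skipped.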
3.1 The Affine-SDCA algorithm The algorithmic approach for exploiting locally affine losses in SDCA is very different from that for gradient descent style algorithms; for some affine losses we certify early that some α_i are in their final form (see Lemma 9) and henceforth ignore them. This applies only to locally affine (not just smooth) losses, but unlike Local-SVRG, it does not require modifying the algorithm for explicit localization. We use a reduction to obtain improved rates while reusing the theory of [9] for the remaining points. These results are stated for squared Euclidean regularization, but hold for strongly convex R as in [10].
Lemma 9. Let w_t = w(α_t) ∈ B(w*, r), and let {g_i} = ⋃_{w∈B(w_t,r)} {φ_i'(x_i^⊤ w)}; in other words, φ_i(x_i^⊤ ·) is affine on B(w_t, r), which includes w*. Then we can compute the optimal value α_i^* = −g_i.
Proof. As stated in Section 7 of [6], for each i we have −α_i^* = φ_i'(x_i^⊤ w*). Then if φ_i'(x_i^⊤ w) is a constant singleton on B(w_t, r) containing w*, that constant is in particular −α_i^*.
The lemma enables Algorithm 2 to ignore a growing proportion of losses. The overall convergence this enables is given by the following.
Algorithm 2 Affine-SDCA: adapting to locally affine φ_i, with speedup approximately A(r).
1. α^0 = 0 ∈ ℝⁿ, I_0 = ∅.
2. For τ ∈ {1, . . .}:
(a) w̃_τ = w(α^{(τ−1)m}); compute r_τ = sqrt(2(P(w̃_τ) − D(α^{(τ−1)m}))/µ).
(b) Compute I_τ = {i : |⋃_{w∈B(w̃_τ,r_τ)} {φ_i'(x_i^⊤ w)}| = 1}.
(c) For i ∈ I_τ \ I_{τ−1}: set α_i^{(τ−1)m} = −φ_i'(x_i^⊤ w̃_τ).
(d) Set p_i^τ ∝ 0 if i ∈ I_τ, and 1 + L_i(nµ)⁻¹ otherwise; s_i = 0 if i ∈ I_τ, and s/p_i^τ otherwise.
(e) For t ∈ [(τ − 1)m + 1, τm]:
i. Choose i_t ∼ p^τ.
ii. Compute ∆α_{i_t}^t = s_{i_t} · (−φ_{i_t}'(x_{i_t}^⊤ w(α^{t−1})) − α_{i_t}^{t−1}).
iii. Set α_j^t = α_j^{t−1} + ∆α_j^t if j = i_t, and α_j^t = α_j^{t−1} otherwise.
Theorem 10. If at epoch τ Algorithm 2 is at duality gap ε_τ, it will achieve expected duality gap ε in at most

(n′ + A⁻¹(2r) L′_avg/µ) log((n′ + A⁻¹(2r) L′_avg/µ) ε_τ/ε)

iterations, where n′ = n − |I_τ| and L′_avg = n′⁻¹ Σ_{i∈[n]\I_τ} L_i.
Remark 11.
Assuming L_i = L for simplicity, and recalling A(2r) ≤ n/n′, we find the number of iterations is reduced by a factor of at least A(2r), compared to using p_i ∝ 1 + L_i(nµ)⁻¹. In contrast, the cost of steps 2a to 2d added by Algorithm 2 is at most a factor of O((m + n)/m), which may be driven towards one by the choice of m. Recent work [1] modified SDCA for dynamic importance sampling dependent on the so-called dual residual κ_i = α_i + φ_i'(x_i^⊤ w(α)) (where by φ_i'(w) we refer to the derivative of φ_i at w), which is 0 at α*. They exhibit practical improvements in convergence, especially for smooth SVM, and theoretical speedups when κ is sparse (for an impractical version of the algorithm), but [1] does not tell us when this pre-condition holds, nor the magnitude of the expected benefit in terms of properties of the problem (as opposed to algorithm state such as κ). In the context of locally flat losses such as smooth SVM, we answer these questions through local smoothness: Lemma 9 shows κ_i tends to zero for losses that are locally affine on a ball around the optimum, and the practical Algorithm 2 realizes the benefit when this certification comes into play, as quantified in terms of A(r). 3.2 The Empirical ∆ SDCA algorithm Algorithm 2 uses local affinity and a small duality gap to certify the optimality of some α_i, avoiding calculating ∆α_i that are zero or useless; naturally, r is small enough only late in the process. Algorithm 3 instead dedicates half of its samples in proportion to the magnitude of recent ∆α_i (the other half chosen uniformly). As Figure 2 illustrates, this approach leads to significant speedup much earlier than the approach based on duality-gap certification of local affinity.
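The half-adaptive, half-uniform sampler just described can be sketched as follows (a minimal sketch, names ours; A_i stands for the tracked magnitude of recent updates for loss i):

```python
import numpy as np

def mixed_sampling_distribution(A):
    """Algorithm-3-style sampler: half the mass proportional to the
    tracked update magnitudes A_i, half uniform, so every loss keeps
    a probability floor of 1/(2n) and stale estimates get refreshed."""
    A = np.asarray(A, dtype=float)
    n = len(A)
    p_emp = A / A.sum() if A.sum() > 0 else np.full(n, 1.0 / n)
    return 0.5 * p_emp + 0.5 / n
```

The uniform half is what bounds the variance and guarantees that a loss whose ∆α_i becomes large again is eventually re-detected.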
While it is not clear that we can prove for Algorithm 3 a bound that strictly improves on Algorithm 2, it is worth noting that except for (probably rare) updates to i ∈ I_τ, and a factor of 2, the empirical algorithm should quickly detect all locally affine losses, hence obtain at least the speedup of the certifying algorithm. In addition, it naturally adapts to the expected small updates of locally smooth losses. Note that ∆α_i is closely related to (and might be replaceable by) κ, but the current algorithm differs significantly from those in [1] in how these quantities are used to guide sampling.
Algorithm 3 Empirical ∆ SDCA
1. α^0 = 0 ∈ ℝⁿ, A_i^0 = 0.
2. For τ ∈ {1, . . .}:
(a) p^τ = 0.5 p^{τ,1} + 0.5 p², where p_i^{τ,1} ∝ A_i^{(τ−1)m} and p_i² = n⁻¹.
(b) For t ∈ [(τ − 1)m + 1, τm]:
i. Choose i_t ∼ p^τ.
ii. Compute ∆α_{i_t}^t = s_{i_t} · (−φ_{i_t}'(x_{i_t}^⊤ w(α^{t−1})) − α_{i_t}^{t−1}).
iii. Set A_j^t = 0.5 A_j^{t−1} + 0.5 |∆α_j^t| if j = i_t, and A_j^t = A_j^{t−1} otherwise.
iv. Set α_j^t = α_j^{t−1} + ∆α_j^t if j = i_t, and α_j^t = α_j^{t−1} otherwise.
4 Empirical evaluation We applied the same algorithms with almost¹ the same parameters to 4 additional classification datasets to demonstrate the impact of our algorithm variants more widely. The results for SDCA are in Figure 4; those for SVRG are in Figure 5 in Section 7 of the supplementary material, for lack of space. [Figure 4, first two panels: duality gap/suboptimality vs. effective passes over data for SDCA variants solving smoothed hinge loss SVM on Mushroom and w8a: uniform sampling [6], global smoothness sampling [10], Affine-SDCA (Alg. 2), Empirical ∆ SDCA (Alg. 3).]
[Figure 4, remaining panels: the same comparison on Dorothea and ijcnn1.] Figure 4: SDCA variant results on four additional datasets. The advantages of using local smoothness are significant on the harder datasets.
References
[1] Dominik Csiba, Zheng Qu, and Peter Richtárik. Stochastic dual coordinate ascent with adaptive probabilities. arXiv preprint arXiv:1502.08053, 2015.
[2] Rie Johnson and Tong Zhang. Accelerating stochastic gradient descent using predictive variance reduction. In Advances in Neural Information Processing Systems, pages 315–323, 2013.
¹On one of the new datasets, SVRG with a ratio of step-size to L_avg more aggressive than theory suggests stopped converging; hence we changed all runs to use the permissible 1/8. No other parameters were adapted to the dataset.
[3] Qihang Lin, Zhaosong Lu, and Lin Xiao. An accelerated proximal coordinate gradient method and its application to regularized empirical risk minimization. arXiv preprint arXiv:1407.1296, 2014.
[4] Mark Schmidt, Nicolas Le Roux, and Francis Bach. Minimizing finite sums with the stochastic average gradient. arXiv preprint arXiv:1309.2388, 2013.
[5] Shai Shalev-Shwartz and Tong Zhang. Accelerated proximal stochastic dual coordinate ascent for regularized loss minimization. Mathematical Programming, pages 1–41, 2013.
[6] Shai Shalev-Shwartz and Tong Zhang. Stochastic dual coordinate ascent methods for regularized loss. The Journal of Machine Learning Research, 14(1):567–599, 2013.
[7] Lin Xiao and Tong Zhang. A proximal stochastic gradient method with progressive variance reduction. SIAM Journal on Optimization, 24(4):2057–2075, 2014.
[8] Yuchen Zhang and Lin Xiao. Stochastic primal-dual coordinate method for regularized empirical risk minimization. arXiv preprint arXiv:1409.3257, 2014.
[9] Peilin Zhao and Tong Zhang. Stochastic optimization with importance sampling. arXiv preprint arXiv:1401.2753, 2014.
[10] Peilin Zhao and Tong Zhang. Stochastic optimization with importance sampling for regularized loss minimization. Proceedings of The 32nd International Conference on Machine Learning, 2015.
Unlocking neural population non-stationarity using a hierarchical dynamics model Mijung Park1, Gergo Bohner1, Jakob H. Macke2 1 Gatsby Computational Neuroscience Unit, University College London 2 Research Center caesar, an associate of the Max Planck Society, Bonn Max Planck Institute for Biological Cybernetics, Bernstein Center for Computational Neuroscience Tübingen {mijung, gbohner}@gatsby.ucl.ac.uk, jakob.macke@caesar.de Abstract Neural population activity often exhibits rich variability. This variability can arise from single-neuron stochasticity, neural dynamics on short time-scales, as well as from modulations of neural firing properties on long time-scales, often referred to as neural non-stationarity. To better understand the nature of co-variability in neural circuits and their impact on cortical information processing, we introduce a hierarchical dynamics model that is able to capture both slow inter-trial modulations in firing rates as well as neural population dynamics. We derive a Bayesian Laplace propagation algorithm for joint inference of parameters and population states. On neural population recordings from primary visual cortex, we demonstrate that our model provides a better account of the structure of neural firing than stationary dynamics models. 1 Introduction Neural spiking activity recorded from populations of cortical neurons can exhibit substantial variability in response to repeated presentations of a sensory stimulus [1]. This variability is thought to arise both from dynamics generated endogenously within the circuit [2] as well as from variations in internal and behavioural states [3, 4, 5, 6, 7]. An understanding of how the interplay between sensory inputs and endogenous dynamics shapes neural activity patterns is essential for our understanding of how information is processed by neuronal populations.
Multiple statistical [8, 9, 10, 11, 12, 13] and mechanistic [14] models for characterising neuronal population dynamics have been developed. In addition to these dynamics which take place on fast time-scales (milliseconds up to a few seconds), there are also processes modulating neural firing activity which take place on much slower timescales (seconds to hours). Slow drifts in rates across an experiment can be caused by fluctuations in arousal, anaesthesia level or other physiological properties of the experimental preparation [15, 16, 17]. Furthermore, processes such as learning and short-term plasticity can lead to slow changes in neural firing properties [18]. The statistical structure of these slow fluctuations has been modelled using state-space models and related techniques [19, 20, 21, 22, 23]. Recent experimental findings have shown that slow, multiplicative fluctuations in neural excitability are a dominant source of neural covariability in extracellular multi-cell recordings from cortical circuits [5, 17, 24]. To accurately capture the structure of neural dynamics and to disentangle the contributions of slow and fast modulatory processes to neural variability and co-variability, it is therefore important to develop models that can capture neural dynamics both on fast (i.e., within experimental trials) and slow (i.e., across trials) time-scales. Few such models exist: Czanner et al. [25] presented a statistical model of single-neuron firing in which within-trial dynamics are modelled by (generalised) linear coupling from the recent spiking history of each neuron onto its instantaneous firing rate, and across-trial dynamics were modelled by defining a random walk model over parameters. More recently, Mangion et al. [26] presented a latent linear dynamical system model with Poisson observations (PLDS, [8, 11, 13]) with a one-dimensional latent space, and used a heuristic filtering approach for tracking parameters, again based on a random-walk model.
Rabinowitz et al [27] presented a technique for identifying slow modulatory inputs from the recordings of single neurons using a Gaussian Process model and an efficient inference technique using evidence optimisation. Here, we present a hierarchical model that consists of a latent dynamical system with Poisson observations (PLDS) to model neural population dynamics, combined with a Gaussian process (GP) [28] to model modulations in firing rates or model-parameters across experimental trials. The use of an exponential nonlinearity implies that latent modulations have a multiplicative effect on neural firing rates. Compared to previous models using random walks over parameters, using a GP is a more flexible and powerful way of modelling the statistical structure of non-stationarity, and makes it possible to use hyper-parameters that model the variability and smoothness of parameter-changes across time. In this paper, we focus on a concrete variant of this general model: We introduce a new set of variables which control neural firing rate on each trial to capture non-stationarity in firing rates. We derive a Bayesian Laplace propagation method for inferring the posterior distributions over the latent variables and the parameters from population recordings of spiking activity. Our approach generalises the 1-dimensional latent states in [26] to models with multi-dimensional states, as well as to a Bayesian treatment of non-stationarity based on Gaussian Process priors. The paper is organised as follows: In Sec. 2, we introduce our framework for constructing non-stationary neural population models, as well as the concrete model we will use for analyses. In Sec. 3, we derive the Bayesian Laplace propagation algorithm. In Sec. 4, we show applications to simulated data and neural population recordings from visual cortex. 
2 Hierarchical non-stationary models of neural population dynamics We start by introducing a hierarchical model for capturing short time-scale population dynamics as well as long time-scale non-stationarities in firing rates. Although we use the term “non-stationary” to mean that the system is best described by parameters that change over time (which is how the term is often used in the context of neural data analysis), we note that the distribution over parameters can be described by a stochastic process which might be strictly stationary in the statistical sense¹. Modelling framework We assume that the neural population activity of p neurons y_t ∈ ℝᵖ depends on a k-dimensional latent state x_t ∈ ℝᵏ and a modulatory factor h^(i) ∈ ℝᵏ which is different for each trial i = {1, . . . , r}. The latent state x models short-term co-variability of spiking activity and the modulatory factor h models slowly varying mean firing rates across experimental trials. We model neural spiking activity as conditionally Poisson given the latent state x_t and a modulator h^(i), with a log firing rate which is linear in the parameters and latent factors, y_t | x_t, C, h^(i), d ∼ Poiss(y_t | exp(C(x_t + h^(i)) + d)), where the loading matrix C ∈ ℝ^{p×k} specifies how each neuron is related to the latent state and the modulator, d ∈ ℝᵖ is an offset term that controls the mean firing rate of each cell, and Poiss(y_t | w) means that the ith entry of y_t is drawn independently from a Poisson distribution with mean w_i (the ith entry of w). Because of the use of an exponential firing-rate nonlinearity, latent factors have a multiplicative effect on neural firing rates, as has been observed experimentally [17, 5]. Following [11, 13, 26], we assume that the latent dynamics evolve according to a first-order autoregressive process with Gaussian innovations, x_t | x_{t−1}, A, B, Q ∼ N(x_t | A x_{t−1} + B u_t, Q). Here, we allow sensory stimuli (or experimental covariates) u_t ∈ ℝᵈ to influence the latent states linearly.
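The generative model described so far can be sampled from directly; a minimal sketch for a single trial (function and variable names ours, with Q fixed to the identity as in the text):

```python
import numpy as np

def sample_nplds_trial(A, B, C, d, h, u, T, rng):
    """Draw one trial from the N-PLDS generative model: x_0 ~ N(0, I),
    latent AR(1) states x_t = A x_{t-1} + B u_t + N(0, I) innovations
    (Q = I_k), and Poisson spike counts with log-rate C (x_t + h) + d."""
    k = A.shape[0]
    x = rng.standard_normal(k)          # x_0 ~ N(0, I_k)
    spikes = []
    for t in range(T):
        x = A @ x + B @ u[t] + rng.standard_normal(k)
        rate = np.exp(C @ (x + h) + d)  # multiplicative effect of h
        spikes.append(rng.poisson(rate))
    return np.array(spikes)
```

Sampling r trials with different modulators h^(i) reproduces the across-trial firing-rate fluctuations the model is designed to capture.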
The dynamics matrix A ∈ ℝ^{k×k} determines the state evolution, B ∈ ℝ^{k×d} models the dependence of latent states on external inputs, and Q ∈ ℝ^{k×k} is the covariance of the innovation noise. We set Q to the identity matrix, Q = I_k, as in [29], and we assume x_0^(i) ∼ N(0, I_k). ¹A stochastic process is strict-sense stationary if its joint distribution over any two time-points t and s only depends on the elapsed time t − s. Figure 1: Schematic of the hierarchical non-stationary Poisson-observation Latent Dynamical System (N-PLDS) for capturing non-stationarity in mean firing rates. The parameter h slowly varies across trials and leads to fluctuations in mean firing rates. The parameters in this model are θ = {A, B, C, d, h^(1:r)}. We refer to this general model as non-stationary PLDS (N-PLDS). Different variants of N-PLDS can be constructed by placing priors on individual parameters which allow them to vary across trials (in which case they would then depend on the trial index i) or by omitting different components of the model². For the modulator h, we assume that it varies across trials according to a GP with mean m_h and (modified) squared exponential kernel, h^(i) ∼ GP(m_h, K(i, j)), where the (i, j)th block of K (of size k × k) is given by

K(i, j) = (σ² + ε δ_{i,j}) exp(−(i − j)²/(2τ²)) I_k.

Here, we assume the independent noise variance on the diagonal (ε) to be constant and small, as in [30]. When σ² = ε = 0, the modulator vanishes, which corresponds to the conventional PLDS model with fixed parameters [11, 13]. When σ² > 0, the mean firing rates vary across trials, and the parameter τ determines the timescale (in units of ‘trials’) of these fluctuations. We impose ridge priors on the model parameters (see Appendix for details), so that the total set of hyperparameters of the model is Φ = {m_h, σ², τ², φ}, where φ is the set of ridge parameters. 3 Bayesian Laplace propagation Our goal is to infer parameters and latent variables in the model.
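The trial kernel defined above is straightforward to build; a minimal sketch (names ours) of its scalar version, which the full K extends block-wise by I_k:

```python
import numpy as np

def trial_kernel(r, sigma2, tau2, eps=1e-6):
    """Squared-exponential kernel over trial indices 0..r-1 with a
    small constant jitter eps on the diagonal, as in the text; the
    full K of the model is this matrix with each entry times I_k."""
    idx = np.arange(r)
    diff = idx[:, None] - idx[None, :]
    K = sigma2 * np.exp(-diff**2 / (2.0 * tau2))
    return K + eps * np.eye(r)
```

Larger τ² makes neighbouring trials more strongly correlated, i.e. slower modulations of the firing rates.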
The exact posterior distribution is analytically intractable due to the use of a Poisson likelihood, and we therefore assume the joint posterior over the latent variables and parameters to be factorising,

p(θ, x^(1:r)_{1:T} | y^(1:r)_{1:T}, Φ) ∝ p(y^(1:r)_{1:T} | x^(1:r)_{1:T}, θ) p(x^(1:r)_{1:T} | θ, Φ) p(θ | Φ) ≈ q(θ, x^(1:r)_{1:T}) = q_θ(θ) Π_{i=1}^r q_x(x^(i)_{0:T}).

This factorisation simplifies computing the integrals involved in calculating a bound on the marginal likelihood of the observations,

log p(y^(1:r)_{1:T} | Φ) = log ∫ dθ dx^(1:r)_{1:T} p(θ, x^(1:r)_{1:T}, y^(1:r)_{1:T} | Φ) ≥ ∫ dθ dx^(1:r)_{1:T} q(θ, x^(1:r)_{1:T}) log [ p(θ, x^(1:r)_{1:T}, y^(1:r)_{1:T} | Φ) / q(θ, x^(1:r)_{1:T}) ]. (1)

Similar to the variational Bayesian expectation maximization (VBEM) algorithm [29], our inference procedure consists of the following three steps: (1) we compute the approximate posterior over latent variables q_x(x^(1:r)_{0:T}) by integrating out the parameters,

q_x(x^(1:r)_{0:T}) ∝ exp( ∫ dθ q_θ(θ) log p(x^(1:r)_{1:T}, y^(1:r)_{1:T} | θ) ), (2)

which is performed by forward-backward message passing relying on the order-1 dependency in the latent states. Then, (2) we compute the approximate posterior over parameters q_θ(θ) by integrating out the latent variables,

q_θ(θ) ∝ p(θ) exp( ∫ dx^(1:r)_{0:T} q_x(x^(1:r)_{0:T}) log p(x^(1:r)_{0:T}, y^(1:r)_{1:T} | θ) ), (3)

and (3) we update the hyperparameters by computing the gradients of the bound in eq. 1 after integrating out both latent variables and parameters. We iterate the three steps until convergence.
²A second variant of the model, in which the dynamics matrix determining the spatio-temporal correlations in the population varies across trials, is described in the Appendix.
Unfortunately, the integrals in both eq. 2 and eq. 3 are not analytically tractable, even with the Gaussian distributions for q_x(x^(1:r)_{0:T}) and q_θ(θ). For tractability and fast computation of messages in the forward-backward algorithm for eq.
2, we utilise the so-called Laplace propagation, or Laplace expectation propagation (Laplace-EP) [31, 32, 33], which makes a Gaussian approximation to each message based on the Laplace approximation and then propagates the messages forward and backward. While Laplace propagation in prior work is commonly coupled with point estimates of parameters, we consider the posterior distribution over parameters. For this reason, we refer to our inference method as Bayesian Laplace propagation. The use of approximate message passing in the Laplace propagation implies that there is no longer a guarantee that the lower bound will increase monotonically in each iteration, which is the main difference between our method and the VBEM algorithm. We therefore monitored the convergence of our algorithm by computing one-step-ahead prediction scores [13]. The algorithm proceeds by iterating the following three steps:

(1) Approximating the posterior over latent states: Using the first-order dependency in latent states, we derive a sequential forward/backward algorithm to obtain q_x(x^{(1:r)}_{0:T}), generalising the approach of [26] to multi-dimensional latent states. Since this step decouples across trials, it is easy to parallelize, and we omit the trial indices for clarity. We note that computation of the approximate posterior in this step is not more expensive than Bayesian inference of the latent state in a ‘fixed-parameter’ PLDS. The forward message α(x_t) at time t is given by

α(x_t) ∝ ∫ dx_{t−1} α(x_{t−1}) exp ⟨log( p(x_t | x_{t−1}) p(y_t | x_t) )⟩_{q_θ(θ)}.   (4)

Assuming that the forward message at time t − 1, denoted by α(x_{t−1}), is Gaussian, the Poisson likelihood term will render the forward message at time t non-Gaussian, but we approximate α(x_t) as a Gaussian using the first and second derivatives of the right-hand side of eq. 4 with respect to x_t.
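The Gaussian approximation applied to each message can be sketched generically: find the mode of the (unnormalised) log-density by Newton ascent, and take the negative inverse Hessian at the mode as the variance. The toy one-dimensional log-density below (a Gaussian prior times a Poisson likelihood with exponential link, mirroring the structure of eq. 4) and all names are our own illustration, not the paper's code:

```python
import numpy as np

def laplace_gaussian(grad, hess, x0, iters=50):
    """Laplace's method in 1-d: Newton-ascend to the mode, then
    approximate the density by N(mode, -1/hess(mode))."""
    x = x0
    for _ in range(iters):
        x = x - grad(x) / hess(x)  # Newton step toward the mode
    return x, -1.0 / hess(x)

# Toy message: N(x; 0, 1) prior times Poisson(y | exp(c*x + d)) likelihood.
y, c, d = 3, 1.0, 0.0
grad = lambda x: -x + y * c - c * np.exp(c * x + d)   # d/dx of log density
hess = lambda x: -1.0 - c**2 * np.exp(c * x + d)      # always negative: log-concave
mean, var = laplace_gaussian(grad, hess, x0=0.0)
```

Because the Poisson log-likelihood with exponential link is log-concave, the Newton iteration converges reliably, which is one reason this approximation is attractive inside a forward-backward sweep.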
Similarly, the backward message at time t − 1 is given by

β(x_{t−1}) ∝ ∫ dx_t β(x_t) exp ⟨log( p(x_t | x_{t−1}) p(y_t | x_t) )⟩_{q_θ(θ)},   (5)

which we also approximate by a Gaussian for tractability in computing backward messages. Using the forward/backward messages, we compute the posterior marginal distribution over latent variables (see Appendix). We need to compute the cross-covariance between neighbouring latent variables to obtain the sufficient statistics of the latent variables (which we will need for updating the posterior over parameters). The pairwise marginals of latent variables are given by

p(x_t, x_{t+1} | y_{1:T}) ∝ β(x_{t+1}) exp ⟨log( p(y_{t+1} | x_{t+1}) p(x_{t+1} | x_t) )⟩_{q_θ(θ)} α(x_t),   (6)

which we approximate as a joint Gaussian distribution by using the first/second derivatives of eq. 6 and extracting the cross-covariance term from the joint covariance matrix.

(2) Approximating the posterior over parameters: After inferring the posterior over latent states, we update the posterior distribution over the parameters. The posterior over parameters factorizes as

q_θ(θ) = q_{a,b}(a, b) q_{c,d,h}(c, d, h^{(1:r)}),   (7)

where we used the vectorized notations b = vec(B^⊤) and c = vec(C^⊤). We set c, d to the maximum likelihood estimates ĉ, d̂ for simplicity in inference. The computational cost of this algorithm is dominated by the cost of calculating the posterior distribution over h^{(1:r)}, which involves manipulating an rk-dimensional Gaussian. While this was still tractable without further approximations for the data-set sizes used in our analyses below (hundreds of trials), a variety of approximate methods for GP inference exist which could be used to improve the efficiency of this computation. In particular, we will typically be dealing with systems in which τ ≫ 1, which means that the kernel matrix is smooth and could be approximated using low-rank representations [28].
(3) Estimating hyperparameters: Finally, after obtaining the approximate posterior q(θ, x^{(1:r)}_{0:T}), we update the hyperparameters of the prior by maximizing the lower bound with respect to the hyperparameters. The variational lower bound simplifies to (see Ch. 5 of [29] for details; note that the use of Gaussian approximate posteriors ensures that this step is analogous to hyperparameter updating in a fully Gaussian LDS)

log p(y^{(1:r)}_{1:T} | Φ) ≥ −KL(Φ) + c,   (8)

where c is a constant.

Figure 2: Illustration of non-stationarity in firing rates (simulated data). A, B: Spike rates of 40 neurons are influenced by two slowly varying firing-rate modulators. The log mean firing rates of the two groups of neurons are z1 (red, group 1) and z2 (blue, group 2) across 100 trials. C, D: Raster plots show the extreme cases, i.e. trials 25 and 75. The traces show the posterior mean of z estimated by N-PLDS (light blue for z2, light red for z1), independent PLDSs (a PLDS fit to each trial's data individually, dark gray), and a single PLDS (light gray). E: Total and conditional (on each trial) covariance of recovered neural responses from each model (averaged across all neuron pairs, and then normalised for visualisation). The covariances recovered by our model (red) match the true ones (black) well, while those of independent PLDSs (gray) and a single PLDS (light gray) do not.
Here, the KL divergence between the prior and posterior over parameters, denoted by N(μ_Φ, Σ_Φ) and N(μ, Σ) respectively, is given by

KL(Φ) = −(1/2) log |Σ_Φ^{−1} Σ| + (1/2) Tr(Σ_Φ^{−1} Σ) + (1/2)(μ − μ_Φ)^⊤ Σ_Φ^{−1} (μ − μ_Φ) + c,   (9)

where the prior mean and covariance depend on the hyperparameters. We update the hyperparameters by taking the derivative of the KL w.r.t. each hyperparameter. For the prior mean, the first-derivative expression provides a closed-form update. For τ (the timescale of inter-trial fluctuations in firing rates) and σ² (the variance of inter-trial fluctuations), the derivative expressions do not provide a closed-form update, in which case we compute the KL divergence on a grid defined in each hyperparameter's space and choose the value that minimises the KL.

Predictive distributions for test data. In our model, different trials are no longer considered to be independent, so we can predict parameters for held-out trials. Using the GP model on h and our approximations, we have Gaussian predictive distributions for h* on test data D* given training data D:

p(h* | D, D*) = N( m_h + K* K^{−1}(μ_h − m_h),  K** − K*(K + H_h^{−1})^{−1} K*^⊤ ),   (10)

where K is the prior covariance matrix on D, K** is that on D*, and K* is their prior cross-covariance, as introduced in Ch. 2 of [28]; the negative Hessian H_h is defined as

H_h = −∂²/∂h^{(1:r)2} Σ_{i=1}^{r} [ ∫ dx^{(i)}_{0:T} q(x^{(i)}_{0:T}) Σ_{t=1}^{T} log p(y^{(i)}_t | x^{(i)}_t, ĉ, d̂, h^{(i)}) ].   (11)

In the applications to simulated and neurophysiological data described in the following, we used this approach to predict the properties of neural dynamics on held-out trials.

4 Applications

Simulated data: We first illustrate the performance of N-PLDS on a simulated population recording from 40 neurons consisting of 100 trials of length T = 200 time steps each. We used a 4-dimensional latent state and assumed that the population consisted of two homogeneous subpopulations of size 20 each, with one modulatory input controlling rate fluctuations in each group (see Fig. 2A).
In addition, we assumed that for half of each trial there was a time-varying stimulus (‘drifting grating’), represented by a 3-dimensional vector which consisted of the sine and cosine of the time-varying phase of the stimulus (frequency 0.4 Hz), as well as an additional binary term which indicated whether the stimulus was active. We fit N-PLDS to the data and found that it successfully captures the non-stationarity in (log) mean firing rates, defined by z = C(x + h) + d, as shown in Fig. 2, and recovers the total and trial-conditioned covariances (the across-trial mean of the single-trial covariances of z). For comparison, we also fit 100 separate PLDSs to the data from each trial, as well as a single PLDS to the entire data. The naive approach of fitting an individual PLDS to each trial can, in principle, follow the modulation.

Figure 3: Non-stationary firing rates in a population of V1 neurons. A: Mean firing rates of neurons (black trace) across trials. Left: the 5 most non-stationary neurons. Right: the 5 most stationary neurons. The fitted (solid line) and predicted (circles) mean firing rates are also shown for N-PLDS (in red) and PLDS (in gray). B: Left: the RMSE in predicting single-neuron firing rates across the 5 most non-stationary neurons for varying latent dimensionalities k, where N-PLDS achieves significantly lower RMSE. Middle: RMSE for the 5 most stationary neurons, where there is no difference between the two methods (apart from an outlier at k = 8). Right: RMSE for all 64 neurons.
However, as each model is only fit to one trial, the parameter estimates are very noisy, since they are not sufficiently constrained by the data from each trial. We note that a single PLDS with fixed parameters (as is conventionally used in neural data analysis) is able to track the modulations in firing rates in the posterior mean here; however, a single PLDS would not be able to extrapolate firing rates for unseen trials (as we will demonstrate in our analyses on neural data below). In addition, it will also fail to separate ‘slow’ and ‘fast’ modulations into different parameters. By comparing the total covariance of the data (averaged across neuron pairs) to the ‘trial-conditioned’ covariance (calculated by estimating the covariance on each trial individually and averaging the covariances), one can calculate how much of the cross-neuron co-variability can be explained by across-trial fluctuations in firing rates (see e.g. [17]). In the simulation shown in Fig. 2 (which illustrates an extreme case dominated by strong across-trial effects), the conditional covariance is much smaller than the full covariance.

Neurophysiological data: How big are non-stationarities in neural population recordings, and can our model successfully capture them? To address these questions, we analyzed a population recording from anaesthetized macaque primary visual cortex consisting of 64 neurons stimulated by sine grating stimuli. The details of data collection are described in [5], but our data-set also included units not used in the original study. We binned the spikes recorded during 100 trials of length 4 s (the stimulus was on for 2 s) of the same orientation using 50 ms bins, resulting in trials of length T = 80 bins. Analogously to the simulated dataset above, we parameterised the stimulus as a 3-dimensional vector of the sine and cosine with the same temporal frequency as the drifting grating, as well as an indicator that specifies whether there is a stimulus or not.
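The held-out-trial predictions used below rest on the GP predictive distribution of eq. 10. Stripped to its GP-regression core (one-dimensional, with the Hessian correction H_h omitted, and with purely hypothetical function names and numbers of our own), it can be sketched as:

```python
import numpy as np

def gp_predict(train_idx, test_idx, mu_h, m_h, sigma2, tau, jitter=1e-6):
    """Predict the modulator on held-out trial indices from training trials.

    Implements the mean of eq. 10, m_h + K* K^{-1} (mu_h - m_h), with a
    squared-exponential kernel over trial indices (H_h correction omitted).
    """
    def kern(a, b):
        return sigma2 * np.exp(-0.5 * (a[:, None] - b[None, :])**2 / tau**2)

    K = kern(train_idx, train_idx) + jitter * np.eye(len(train_idx))
    Ks = kern(test_idx, train_idx)
    Kss = kern(test_idx, test_idx)
    mean = m_h + Ks @ np.linalg.solve(K, mu_h - m_h)
    cov = Kss - Ks @ np.linalg.solve(K, Ks.T)
    return mean, cov

train = np.array([0.0, 1.0, 2.0, 4.0, 5.0])   # observed trial indices
h_train = np.sin(0.3 * train)                  # hypothetical modulator values
mean, cov = gp_predict(train, np.array([3.0]), h_train, m_h=0.0, sigma2=1.0, tau=2.0)
```

The smoothness assumption (τ spanning several trials) is what makes interpolation to a held-out trial index meaningful here.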
We used 10-fold cross-validation to evaluate the performance of the model, i.e. we repeatedly divided the data into test data (10 trials) and training data (the remaining 90 trials). We fit the model on each training set and, using the parameters estimated from the training data, made predictions for the modulator h on test data by using the mean of the predictive distribution over h. We note that, in contrast to conventional applications of cross-validation which assume i.i.d. trials, our model here also takes into account correlations in firing rates across trials; therefore, we had to keep the trial indices in order to compute predictive distributions for test data using eq. 10. Using these parameters, we drew samples of spikes for entire trials to compute the mean firing rate of each neuron on each trial. For comparison, we also fit a single PLDS to the data. As this model does not allow for across-trial modulations of firing rates, we simply kept the parameters estimated from the training data. For visualisation of results, we quantified the ‘non-stationarity’ of each neuron by first smoothing its firing rate across trials (using a kernel of size 10 trials), calculating the variance of the smoothed firing-rate estimate, and displaying firing rates for the 5 most non-stationary neurons in the population (Fig. 3A, left) as well as the 5 most stationary neurons (Fig. 3A, right). Importantly, the firing rates were also correctly interpolated for held-out trials (circles in Fig. 3A). To evaluate whether the additional parameters in N-PLDS result in a superior model compared to the conventional PLDS [13], we tested the model with latent dimensionalities ranging from k = 1 to k = 8 and compared each model against a ‘fixed’ PLDS of matched dimensionality (Fig. 3B).
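The non-stationarity ranking described above (smooth each neuron's trial-by-trial firing rate, then take the variance of the smoothed trace) can be sketched as follows; the box-kernel smoother and the synthetic rates are our own stand-ins for the paper's kernel and data:

```python
import numpy as np

def nonstationarity_score(rates, width=10):
    """Variance of the smoothed across-trial firing rate, per neuron.

    rates: (n_neurons, n_trials) array of mean firing rates per trial.
    Smoothing uses a simple box kernel of `width` trials.
    """
    kernel = np.ones(width) / width
    smoothed = np.apply_along_axis(
        lambda r: np.convolve(r, kernel, mode="valid"), 1, rates)
    return smoothed.var(axis=1)

# Synthetic example: one slowly drifting neuron, one stationary neuron.
rng = np.random.default_rng(0)
trials = np.arange(100)
drifting = 5 + 2 * np.sin(2 * np.pi * trials / 50) + rng.normal(0, 0.3, 100)
flat = 5 + rng.normal(0, 0.3, 100)
scores = nonstationarity_score(np.vstack([drifting, flat]))
```

Smoothing before taking the variance is what separates slow drift from fast trial-to-trial noise: the box filter averages the noise away but leaves the slow modulation largely intact.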
We estimated predicted firing rates on held-out trials by sampling 1000 replicate trials from the predictive distribution for both models and compared the median (across samples) of the mean firing rates of each neuron to those of the data. The RMSE values shown are the errors of predicted firing rate (in Hz) per neuron per held-out trial (the population mean across all neurons and trials is 4.54 Hz). We found that N-PLDS outperformed PLDS provided that we had sufficiently many latent states, at least k > 3. For large latent dimensionalities (k > 8) performance degraded again, which could be a consequence of overfitting. Furthermore, we show that for non-stationary neurons there is a large gain in predictive power (Fig. 3B, left), whereas for stationary neurons PLDS and N-PLDS have similar prediction accuracy (Fig. 3B, middle). The RMSE on firing rates for all neurons (Fig. 3B, right) suggests that our model correctly identified the fluctuations in firing rates. We also wanted to gain insight into the temporal scale of the underlying non-stationarities. We first looked at the recovered timescales τ of the latent modulators, and found them to be highly preserved across multiple training folds and, importantly, across different values of the latent dimensionality, consistently peaking near 10 trials (Fig. 4A). We made sure that the peak near 10 trials is not merely a consequence of parameter initialization: parameters were initialised by fitting a Gaussian process with an exponentiated-quadratic one-dimensional kernel to each neuron's mean firing rate over trials individually, then taking the mean timescale over neurons as the initial global timescale for our kernel. The initial values were 8.12 ± 0.01, differing slightly between training sets. Similarly, we checked that the parameters of the final model (after 30 iterations of Bayesian Laplace propagation) were indeed superior to the initial values, by monitoring the prediction error on held-out trials.
Furthermore, because the model introduces a smooth change with the correct timescale in the latent space (e.g., the posterior mean of h across trials shown in Fig. 4B), we find that N-PLDS recovers more of the time-lagged covariance of neurons compared to the fixed PLDS model (Fig. 4C).

5 Discussion

Non-stationarities are ubiquitous in neural data: slow modulations in firing properties can result from diverse processes such as plasticity and learning, fluctuations in arousal, cortical reorganisation after injury, as well as development and aging. In addition, non-stationarities in neural data can also be a consequence of experimental artifacts, and can be caused by fluctuations in anaesthesia level, stability of the physiological preparation, or electrode drift.

Figure 4: Non-stationary firing rates in a population of V1 neurons (continued). A: Histogram of time constants across different latent dimensionalities and training sets. The mean at 10.4 is indicated by the vertical red line. B: Estimated 7-dimensional modulator (the posterior mean of h). The modulator, with an estimated length scale of approximately 10 trials, varies smoothly across trials. C: Comparison of normalized mean auto-covariance across neurons.

Whatever the origins of non-stationarities are, it is important to have statistical models which can identify them and disentangle their effects from correlations and dynamics on faster timescales [16]. We here presented a hierarchical model for neural population dynamics in the presence of non-stationarity. Specifically, we concentrated on a variant of this model which focuses on non-stationarity in firing rates.
Recent experimental studies have shown that slow fluctuations in neural excitability, which have a multiplicative effect on neural firing rates, are a dominant source of noise correlations in anaesthetized visual cortex [17, 5, 24]. Because of the exponential spiking nonlinearity employed in our model, the latent additive fluctuations in the modulator variables also have a multiplicative effect on firing rates. Applied to a data-set of neurophysiological recordings, we demonstrated that this modelling approach can successfully capture non-stationarities in neurophysiological recordings from primary visual cortex. In our model, both neural dynamics and latent modulators are mediated by the same low-dimensional subspace (parameterised by C). We note, however, that this assumption does not imply that neurons with strong short-term correlations will also have strong long-term correlations, as different dimensions of this subspace (as long as it is chosen large enough) could be occupied by short- and long-term correlations, respectively. In our applications to neural data, we found that the latent state had to be at least three-dimensional for the non-stationary model to outperform a stationary dynamics model, and it might be the case that at least three dimensions are necessary to capture both fast and slow correlations. It is an open question how correlations on fast and slow timescales are related [17], and the techniques presented here have the potential to be of use for mapping out their relationships.
There are limitations to the current study: (1) We did not address the question of how to select amongst multiple different models which could be used to model neural non-stationarity for a given dataset; (2) we did not present numerical techniques for scaling up the current algorithm to larger trial numbers (e.g., using low-rank approximations to the covariance matrix) or large neural populations; (3) we did not address the question of how to overcome the slow convergence properties of GP kernel parameter estimation [34]; and (4) while Laplace propagation is flexible, it is an approximate inference technique, and the quality of its approximations may vary across models and tasks. We believe that extending our method to address these questions provides an exciting direction for future research, and will result in a powerful set of statistical methods for investigating how neural systems operate in the presence of non-stationarity.

Acknowledgments

We thank Alexander Ecker and the lab of Andreas Tolias for sharing their data with us [5] (see http://toliaslab.org/publications/ecker-et-al-2014/), and for allowing us to use it in this publication, as well as Maneesh Sahani and Alexander Ecker for valuable comments. This work was funded by the Gatsby Charitable Foundation (MP and GB) and the German Federal Ministry of Education and Research (MP and JHM) through BMBF; FKZ:01GQ1002 (Bernstein Center Tübingen). Code available at http://www.mackelab.org/code.

References

[1] A. Renart and C. K. Machens. Variability in neural activity and behavior. Curr Opin Neurobiol, 25:211–20, 2014.
[2] A. Destexhe. Intracellular and computational evidence for a dominant role of internal network activity in cortical computations. Curr Opin Neurobiol, 21(5):717–725, 2011.
[3] G. Maimon. Modulation of visual physiology by behavioral state in monkeys, mice, and flies. Curr Opin Neurobiol, 21(4):559–64, 2011.
[4] K. D. Harris and A. Thiele. Cortical state and attention.
Nat Rev Neurosci, 12(9):509–523, 2011.
[5] Ecker et al. State dependence of noise correlations in macaque primary visual cortex. Neuron, 82(1):235–48, 2014.
[6] Ralf M. Haefner, Pietro Berkes, and József Fiser. Perceptual decision-making as probabilistic inference by neural sampling. arXiv preprint arXiv:1409.0257, 2014.
[7] Alexander S. Ecker, George H. Denfield, Matthias Bethge, and Andreas S. Tolias. On the structure of population activity under fluctuations in attentional state. bioRxiv, page 018226, 2015.
[8] A. C. Smith and E. N. Brown. Estimating a state-space model from point process observations. Neural Comput, 15(5):965–91, 2003.
[9] U. T. Eden, L. M. Frank, R. Barbieri, V. Solo, and E. N. Brown. Dynamic analysis of neural encoding by point process adaptive filtering. Neural Comput, 16(5):971–98, 2004.
[10] B. M. Yu, A. Afshar, G. Santhanam, S. I. Ryu, K. Shenoy, and M. Sahani. Extracting dynamical structure embedded in neural activity. In NIPS 18, pages 1545–1552. MIT Press, Cambridge, MA, 2006.
[11] J. E. Kulkarni and L. Paninski. Common-input models for multiple neural spike-train data. Network, 18(4):375–407, 2007.
[12] W. Truccolo, L. R. Hochberg, and J. P. Donoghue. Collective dynamics in human and monkey sensorimotor cortex: predicting single neuron spikes. Nat Neurosci, 13(1):105–111, 2010.
[13] J. H. Macke, L. Buesing, J. P. Cunningham, B. M. Yu, K. V. Shenoy, and M. Sahani. Empirical models of spiking in neural populations. In NIPS, pages 1350–1358, 2011.
[14] C. van Vreeswijk and H. Sompolinsky. Chaos in neuronal networks with balanced excitatory and inhibitory activity. Science, 274(5293):1724–6, 1996.
[15] G. J. Tomko and D. R. Crapper. Neuronal variability: non-stationary responses to identical visual stimuli. Brain Res, 79(3):405–18, 1974.
[16] C. D. Brody. Correlations without synchrony. Neural Comput, 11(7):1537–51, 1999.
[17] R. L. T. Goris, J. A. Movshon, and E. P. Simoncelli. Partitioning neuronal variability.
Nat Neurosci, 17(6):858–65, 2014.
[18] C. D. Gilbert and W. Li. Adult visual cortical plasticity. Neuron, 75(2):250–64, 2012.
[19] E. N. Brown, D. P. Nguyen, L. M. Frank, M. A. Wilson, and V. Solo. An analysis of neural receptive field plasticity by point process adaptive filtering. Proc Natl Acad Sci U S A, 98(21):12261–6, 2001.
[20] Frank et al. Contrasting patterns of receptive field plasticity in the hippocampus and the entorhinal cortex: an adaptive filtering approach. J Neurosci, 22(9):3817–30, 2002.
[21] N. A. Lesica and G. B. Stanley. Improved tracking of time-varying encoding properties of visual neurons by extended recursive least-squares. IEEE Trans Neural Syst Rehabil Eng, 13(2):194–200, 2005.
[22] V. Ventura, C. Cai, and R. E. Kass. Trial-to-trial variability and its effect on time-varying dependency between two neurons, 2005.
[23] C. S. Quiroga-Lombard, J. Hass, and D. Durstewitz. Method for stationarity-segmentation of spike train data with application to the Pearson cross-correlation. J Neurophysiol, 110(2):562–72, 2013.
[24] Schölvinck et al. Cortical state determines global variability and correlations in visual cortex. J Neurosci, 35(1):170–8, 2015.
[25] Gabriela C., Uri T. E., Sylvia W., Marianna Y., Wendy A. S., and Emery N. B. Analysis of between-trial and within-trial neural spiking dynamics. Journal of Neurophysiology, 99(5):2672–2693, 2008.
[26] Mangion et al. Online variational inference for state-space models with point-process observations. Neural Comput, 23(8):1967–1999, 2011.
[27] Neil C. Rabinowitz, Robbe L. T. Goris, Johannes Ballé, and Eero P. Simoncelli. A model of sensory neural responses in the presence of unknown modulatory inputs. arXiv preprint arXiv:1507.01497, 2015.
[28] C. E. Rasmussen and C. K. I. Williams. Gaussian Processes for Machine Learning. MIT Press, Cambridge, MA, USA, 2006.
[29] M. J. Beal. Variational Algorithms for Approximate Bayesian Inference. PhD thesis, Gatsby Unit, University College London, 2003.
[30] Yu et al.
Gaussian-process factor analysis for low-dimensional single-trial analysis of neural population activity. J Neurophysiol, 102(1):614–635, 2009.
[31] A. J. Smola, V. Vishwanathan, and E. Eskin. Laplace propagation. In Sebastian Thrun, Lawrence K. Saul, and Bernhard Schölkopf, editors, NIPS, pages 441–448. MIT Press, 2003.
[32] A. Ypma and T. Heskes. Novel approximations for inference in nonlinear dynamical systems using expectation propagation. Neurocomput., 69(1-3):85–99, 2005.
[33] B. M. Yu, K. V. Shenoy, and M. Sahani. Expectation propagation for inference in non-linear dynamical models with Poisson observations. In Proc IEEE Nonlinear Statistical Signal Processing Workshop, 2006.
[34] I. Murray and R. P. Adams. Slice sampling covariance hyperparameters of latent Gaussian models. In NIPS 23, pages 1723–1731, 2010.
Pointer Networks

Oriol Vinyals∗ (Google Brain), Meire Fortunato∗ (Department of Mathematics, UC Berkeley), Navdeep Jaitly (Google Brain)

Abstract

We introduce a new neural architecture to learn the conditional probability of an output sequence with elements that are discrete tokens corresponding to positions in an input sequence. Such problems cannot be trivially addressed by existing approaches such as sequence-to-sequence [1] and Neural Turing Machines [2], because the number of target classes at each step of the output depends on the length of the input, which is variable. Problems such as sorting variable-sized sequences, and various combinatorial optimization problems, belong to this class. Our model solves the problem of variable-size output dictionaries using a recently proposed mechanism of neural attention. It differs from previous attention attempts in that, instead of using attention to blend hidden units of an encoder into a context vector at each decoder step, it uses attention as a pointer to select a member of the input sequence as the output. We call this architecture a Pointer Net (Ptr-Net). We show Ptr-Nets can be used to learn approximate solutions to three challenging geometric problems – finding planar convex hulls, computing Delaunay triangulations, and the planar Travelling Salesman Problem – using training examples alone. Ptr-Nets not only improve over sequence-to-sequence with input attention, but also allow us to generalize to variable-size output dictionaries. We show that the learnt models generalize beyond the maximum lengths they were trained on. We hope our results on these tasks will encourage a broader exploration of neural learning for discrete problems.

1 Introduction

Recurrent Neural Networks (RNNs) have been used for learning functions over sequences from examples for more than three decades [3]. However, their architecture limited them to settings where the inputs and outputs were available at a fixed frame rate (e.g. [4]).
The recently introduced sequence-to-sequence paradigm [1] removed these constraints by using one RNN to map an input sequence to an embedding and another (possibly the same) RNN to map the embedding to an output sequence. Bahdanau et al. [5] augmented the decoder by propagating extra contextual information from the input using a content-based attentional mechanism [5, 2, 6, 7]. These developments have made it possible to apply RNNs to new domains, achieving state-of-the-art results in core problems in natural language processing such as translation [1, 5] and parsing [8], image and video captioning [9, 10], and even learning to execute small programs [2, 11]. Nonetheless, these methods still require the size of the output dictionary to be fixed a priori. Because of this constraint we cannot directly apply this framework to combinatorial problems where the size of the output dictionary depends on the length of the input sequence. In this paper, we address this limitation by repurposing the attention mechanism of [5] to create pointers to input elements. We show that the resulting architecture, which we name Pointer Networks (Ptr-Nets), can be trained to output satisfactory solutions to three combinatorial optimization problems – computing planar convex hulls, Delaunay triangulations and the symmetric planar Travelling Salesman Problem (TSP). The resulting models produce approximate solutions to these problems in a purely data driven fashion (i.e., when we only have examples of inputs and desired outputs). The proposed approach is depicted in Figure 1.

∗ Equal contribution

Figure 1: (a) Sequence-to-Sequence - An RNN (blue) processes the input sequence to create a code vector that is used to generate the output sequence (purple) using the probability chain rule and another RNN. The output dimensionality is fixed by the dimensionality of the problem and is the same during training and inference [1]. (b) Ptr-Net - An encoding RNN converts the input sequence to a code (blue) that is fed to the generating network (purple). At each step, the generating network produces a vector that modulates a content-based attention mechanism over the inputs ([5, 2]). The output of the attention mechanism is a softmax distribution with dictionary size equal to the length of the input.

The main contributions of our work are as follows:
• We propose a new architecture, which we call Pointer Net, that is simple and effective. It deals with the fundamental problem of representing variable-length dictionaries by using a softmax probability distribution as a “pointer”.
• We apply the Pointer Net model to three distinct non-trivial algorithmic problems involving geometry. We show that the learned model generalizes to test problems with more points than the training problems.
• Our Pointer Net model learns a competitive small-scale (n ≤ 50) TSP approximate solver. Our results demonstrate that a purely data driven approach can learn approximate solutions to problems that are computationally intractable.

2 Models

We review the sequence-to-sequence [1] and input-attention models [5] that are the baselines for this work in Sections 2.1 and 2.2. We then describe our model, the Ptr-Net, in Section 2.3.

2.1 Sequence-to-Sequence Model

Given a training pair (P, C^P), the sequence-to-sequence model computes the conditional probability p(C^P | P; θ) using a parametric model (an RNN with parameters θ) to estimate the terms of the probability chain rule (also see Figure 1), i.e.

p(C^P | P; θ) = ∏_{i=1}^{m(P)} p(C_i | C_1, . . . , C_{i−1}, P; θ).   (1)

Here P = {P_1, . . . , P_n} is a sequence of n vectors and C^P = {C_1, . . . , C_{m(P)}} is a sequence of m(P) indices, each between 1 and n (we note that the target sequence length m(P) is, in general, a function of P). The parameters of the model are learnt by maximizing the conditional probabilities for the training set, i.e.
θ* = arg max_θ Σ_{P, C^P} log p(C^P | P; θ),   (2)

where the sum is over training examples. As in [1], we use a Long Short-Term Memory (LSTM) [12] to model p(C_i | C_1, . . . , C_{i−1}, P; θ). The RNN is fed P_i at each time step i until the end of the input sequence is reached, at which time a special symbol ⇒ is input to the model. The model then switches to generation mode until the network encounters the special symbol ⇐, which represents termination of the output sequence. Note that this model makes no statistical independence assumptions. We use two separate RNNs (one to encode the sequence of vectors P_j, and another to produce or decode the output symbols C_i). We call the former RNN the encoder and the latter the decoder or the generative RNN. During inference, given a sequence P, the learnt parameters θ* are used to select the sequence Ĉ^P with the highest probability, i.e., Ĉ^P = arg max_{C^P} p(C^P | P; θ*). Finding the optimal sequence Ĉ^P is computationally impractical because of the combinatorial number of possible output sequences. Instead we use a beam search procedure to find the best possible sequence given a beam size. In this sequence-to-sequence model, the output dictionary size for all symbols C_i is fixed and equal to n, since the outputs are chosen from the input. Thus, we need to train a separate model for each n. This prevents us from learning solutions to problems that have an output dictionary whose size depends on the input sequence length. Under the assumption that the number of outputs is O(n), this model has computational complexity O(n). However, exact algorithms for the problems we are dealing with are more costly. For example, the convex hull problem has complexity O(n log n). The attention mechanism (see Section 2.2) adds more “computational capacity” to this model.
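The beam search procedure mentioned above can be sketched as follows; `step_logprobs` is a hypothetical stand-in for a trained decoder's per-step log-probabilities over the n input positions, and the toy scorer is our own illustration:

```python
import math

def beam_search(step_logprobs, n, length, beam_size):
    """Keep the beam_size highest-scoring prefixes at each decoding step.

    step_logprobs(prefix) -> list of n log-probabilities, one per input index.
    Returns the best (prefix, cumulative log-probability) pair found.
    """
    beams = [((), 0.0)]  # (prefix of output indices, cumulative log-prob)
    for _ in range(length):
        candidates = []
        for prefix, score in beams:
            for idx, lp in enumerate(step_logprobs(prefix)):
                candidates.append((prefix + (idx,), score + lp))
        candidates.sort(key=lambda c: c[1], reverse=True)
        beams = candidates[:beam_size]
    return beams[0]

# Toy scorer that always prefers index (len(prefix) % 3).
def toy_scorer(prefix):
    fav = len(prefix) % 3
    return [math.log(0.8) if j == fav else math.log(0.1) for j in range(3)]

best, score = beam_search(toy_scorer, n=3, length=4, beam_size=2)
```

With beam size 1 this reduces to greedy decoding; larger beams trade computation for a better approximation to the arg max over sequences.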
2.2 Content Based Input Attention

The vanilla sequence-to-sequence model produces the entire output sequence C^P using the fixed-dimensional state of the recognition RNN at the end of the input sequence P. This constrains the amount of information and computation that can flow through to the generative model. The attention model of [5] ameliorates this problem by augmenting the encoder and decoder RNNs with an additional neural network that uses an attention mechanism over the entire sequence of encoder RNN states. For notation purposes, let us define the encoder and decoder hidden states as (e_1, …, e_n) and (d_1, …, d_{m(P)}), respectively. For the LSTM RNNs, we use the state after the output gate has been component-wise multiplied by the cell activations. We compute the attention vector at each output time i as follows:

u^i_j = v^⊺ tanh(W_1 e_j + W_2 d_i),   j ∈ (1, …, n)
a^i_j = softmax(u^i_j),   j ∈ (1, …, n)    (3)
d'_i = Σ_{j=1}^{n} a^i_j e_j

where softmax normalizes the vector u^i (of length n) to be the "attention" mask over the inputs, and v, W_1, and W_2 are learnable parameters of the model. In all our experiments, we use the same hidden dimensionality at the encoder and decoder (typically 512), so v is a vector and W_1 and W_2 are square matrices. Lastly, d'_i and d_i are concatenated and used as the hidden states from which we make predictions and which we feed to the next time step in the recurrent model. Note that for each output we have to perform n operations, so the computational complexity at inference time becomes O(n^2). This model performs significantly better than the sequence-to-sequence model on the convex hull problem, but it is not applicable to problems where the output dictionary size depends on the input. Nevertheless, a very simple extension (or rather reduction) of the model allows us to do this easily.
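Equation (3) is a few lines of array code. The sketch below uses NumPy with a toy hidden size (the paper uses 512) and randomly initialized parameters; it illustrates the computation only, not a trained model.

```python
import numpy as np

rng = np.random.default_rng(0)
h, n = 4, 5                   # hidden size (512 in the paper) and input length
e = rng.normal(size=(n, h))   # encoder states e_1, ..., e_n
d_i = rng.normal(size=h)      # decoder state at output step i
v = rng.normal(size=h)
W1, W2 = rng.normal(size=(h, h)), rng.normal(size=(h, h))

# u^i_j = v^T tanh(W1 e_j + W2 d_i), computed for all j at once
u = np.tanh(e @ W1.T + d_i @ W2.T) @ v
# a^i = softmax(u^i): attention weights over the n inputs
a = np.exp(u - u.max())
a /= a.sum()
# d'_i = sum_j a^i_j e_j: the blended context, concatenated with d_i downstream
d_prime = a @ e
```

Subtracting `u.max()` before exponentiating is the usual numerically stable softmax; it does not change the result.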
2.3 Ptr-Net

We now describe a very simple modification of the attention model that allows us to apply the method to solve combinatorial optimization problems where the output dictionary size depends on the number of elements in the input sequence. The sequence-to-sequence model of Section 2.1 uses a softmax distribution over a fixed-size output dictionary to compute p(C_i | C_1, …, C_{i−1}, P) in Equation 1. Thus it cannot be used for our problems, where the size of the output dictionary is equal to the length of the input sequence. To solve this problem we model p(C_i | C_1, …, C_{i−1}, P) using the attention mechanism of Equation 3 as follows:

u^i_j = v^⊺ tanh(W_1 e_j + W_2 d_i),   j ∈ (1, …, n)
p(C_i | C_1, …, C_{i−1}, P) = softmax(u^i)

where softmax normalizes the vector u^i (of length n) to be an output distribution over the dictionary of inputs, and v, W_1, and W_2 are learnable parameters of the output model. Here, we do not blend the encoder states e_j to propagate extra information to the decoder, but instead use the u^i_j as pointers to the input elements. In a similar way, to condition on C_{i−1} as in Equation 1, we simply copy the corresponding P_{C_{i−1}} as the input. Both our method and the attention model can be seen as an application of content-based attention mechanisms proposed in [6, 5, 2, 7]. We also note that our approach specifically targets problems whose outputs are discrete and correspond to positions in the input. Such problems may be addressed artificially – for example, we could learn to output the coordinates of the target point directly using an RNN. However, at inference, this solution does not respect the constraint that the outputs map back to the inputs exactly. Without the constraints, the predictions are bound to become blurry over longer sequences, as shown in sequence-to-sequence models for videos [13].
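The only change from Equation 3 is that the softmax itself becomes the output. A sketch (NumPy, random toy parameters, not a trained model) that makes the variable-dictionary property explicit:

```python
import numpy as np

def pointer_distribution(e, d_i, v, W1, W2):
    """p(C_i | C_1..C_{i-1}, P) = softmax(u^i), u^i_j = v^T tanh(W1 e_j + W2 d_i).

    The softmax is the output itself: a distribution over the positions of
    the input sequence, whatever its length n happens to be.
    """
    u = np.tanh(e @ W1.T + d_i @ W2.T) @ v
    p = np.exp(u - u.max())
    return p / p.sum()

rng = np.random.default_rng(1)
h = 4
v, W1, W2 = rng.normal(size=h), rng.normal(size=(h, h)), rng.normal(size=(h, h))
# One parameter set serves any input length: n = 3 and n = 7 below.
p3 = pointer_distribution(rng.normal(size=(3, h)), rng.normal(size=h), v, W1, W2)
p7 = pointer_distribution(rng.normal(size=(7, h)), rng.normal(size=h), v, W1, W2)
```

Note that `v`, `W1` and `W2` have shapes independent of n, which is what lets the same trained model point into inputs of any length.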
3 Motivation and Datasets Structure

In the following sections, we review each of the three problems we considered, as well as our data generation protocol.¹ In the training data, the inputs are planar point sets P = {P_1, …, P_n} with n elements each, where P_j = (x_j, y_j) are the cartesian coordinates of the points over which we find the convex hull, the Delaunay triangulation or the solution to the corresponding Travelling Salesman Problem. In all cases, we sample from a uniform distribution in [0, 1] × [0, 1]. The outputs C^P = {C_1, …, C_{m(P)}} are sequences representing the solution associated to the point set P. In Figure 2, we find an illustration of an input/output pair (P, C^P) for the convex hull and the Delaunay problems.

3.1 Convex Hull

We used this example as a baseline to develop our models and to understand the difficulty of solving combinatorial problems with data-driven approaches. Finding the convex hull of a finite number of points is a well-understood task in computational geometry, and there are several exact solutions available (see [14, 15, 16]). In general, finding the (generally unique) solution has complexity O(n log n), where n is the number of points considered. The vectors P_j are uniformly sampled from [0, 1] × [0, 1]. The elements C_i are indices between 1 and n corresponding to positions in the sequence P, or special tokens representing the beginning or end of the sequence. See Figure 2 (a) for an illustration. To represent the output as a sequence, we start from the point with the lowest index and go counter-clockwise – this is an arbitrary choice but helps reduce ambiguities during training.

¹We will release all the datasets at hidden for reference.

[Figure 2 panels: (a) input P = {P_1, …, P_10} and output sequence C^P = {⇒, 2, 4, 3, 5, 6, 7, 2, ⇐} representing its convex hull; (b) input P = {P_1, …, P_5} and output C^P = {⇒, (1, 2, 4), (1, 4, 5), (1, 3, 5), (1, 2, 3), ⇐} representing its Delaunay triangulation.]
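The output convention of Section 3.1 can be sketched end-to-end with one of the classical exact solutions, Andrew's monotone chain (our illustrative choice; the paper does not specify which exact algorithm was used for data generation, and the function names are ours):

```python
def convex_hull_target(points):
    """Build the target sequence C^P for one training pair: the convex hull
    as 1-based point indices, counter-clockwise, starting from the hull
    vertex with the lowest index (Andrew's monotone chain, O(n log n))."""
    idx = sorted(range(len(points)), key=lambda i: points[i])

    def cross(o, a, b):
        (ox, oy), (ax, ay), (bx, by) = points[o], points[a], points[b]
        return (ax - ox) * (by - oy) - (ay - oy) * (bx - ox)

    lower, upper = [], []
    for chain, order in ((lower, idx), (upper, idx[::-1])):
        for i in order:
            # Pop vertices that would make a clockwise (or collinear) turn.
            while len(chain) >= 2 and cross(chain[-2], chain[-1], i) <= 0:
                chain.pop()
            chain.append(i)
    hull = lower[:-1] + upper[:-1]        # counter-clockwise cycle
    start = hull.index(min(hull))         # rotate to the lowest index
    hull = hull[start:] + hull[:start]
    return [i + 1 for i in hull] + [hull[0] + 1]  # 1-based, closed polygon

# Unit square plus an interior point: the hull is the square itself.
P = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0), (0.5, 0.5)]
target = convex_hull_target(P)
```

The ⇒ and ⇐ tokens would wrap this index sequence in the actual training pairs.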
Figure 2: Input/output representation for (a) the convex hull and (b) the Delaunay triangulation. The tokens ⇒ and ⇐ represent the beginning and end of the sequence, respectively.

3.2 Delaunay Triangulation

A Delaunay triangulation for a set P of points in a plane is a triangulation such that the circumcircle of every triangle is empty, that is, there is no point from P in its interior. Exact O(n log n) solutions are available [17], where n is the number of points in P. In this example, the outputs C^P = {C_1, …, C_{m(P)}} are the corresponding sequences representing the triangulation of the point set P. Each C_i is a triple of integers from 1 to n corresponding to the position of triangle vertices in P, or the beginning/end-of-sequence tokens. See Figure 2 (b). We note that any permutation of the sequence C^P represents the same triangulation for P; additionally, each triangle representation C_i of three integers can also be permuted. Without loss of generality, and similarly to what we did for convex hulls at training time, we order the triangles C_i by their incenter coordinates (lexicographic order) and choose the increasing triangle representation.² Without ordering, the models learned were not as good, and finding a better ordering that the Ptr-Net could exploit better is part of future work.

3.3 Travelling Salesman Problem (TSP)

TSP arises in many areas of theoretical computer science and is an important problem with applications in microchip design and DNA sequencing. In our work we focused on the planar symmetric TSP: given a list of cities, we wish to find the shortest possible route that visits each city exactly once and returns to the starting point. Additionally, we assume the distance between two cities is the same in each opposite direction. This is an NP-hard problem, which allows us to test the capabilities and limitations of our model. The input/output pairs (P, C^P) have a similar format as in the Convex Hull problem described in Section 3.1.
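The triangle canonicalization described in Section 3.2 (increasing index triples, triangles sorted lexicographically by incenter) can be sketched as follows. The triangulation itself is assumed given (exact O(n log n) algorithms exist [17]); the function names are ours.

```python
import math

def canonical_triangulation(points, triangles):
    """Order triangles as described in Section 3.2: write each triangle as an
    increasing 1-based index triple, then sort the triples by the incenter of
    the triangle (lexicographic order on its coordinates)."""
    def incenter(tri):
        A, B, C = (points[i - 1] for i in tri)
        # Side lengths opposite each vertex; the incenter is the
        # side-length-weighted average of the vertices.
        a, b, c = math.dist(B, C), math.dist(A, C), math.dist(A, B)
        s = a + b + c
        return ((a * A[0] + b * B[0] + c * C[0]) / s,
                (a * A[1] + b * B[1] + c * C[1]) / s)

    return sorted((tuple(sorted(t)) for t in triangles), key=incenter)

# A unit square split along one diagonal, triangles given in arbitrary order.
pts = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
ordered = canonical_triangulation(pts, [(3, 1, 2), (4, 3, 1)])
```

The lower-right triangle has its incenter to the right of the upper-left one, so the canonical order is the upper-left triangle first.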
P will be the cartesian coordinates representing the cities, which are chosen randomly in the [0, 1] × [0, 1] square. C^P = {C_1, …, C_n} will be a permutation of integers from 1 to n representing the optimal path (or tour). For consistency, in the training dataset, we always start in the first city, without loss of generality. To generate exact data, we implemented the Held–Karp algorithm [18], which finds the optimal solution in O(2^n n^2) (we used it up to n = 20). For larger n, producing exact solutions is extremely costly; therefore we also considered algorithms that produce approximate solutions: A1 [19] and A2 [20], which are both O(n^2), and A3 [21], which implements the O(n^3) Christofides algorithm. The latter algorithm is guaranteed to find a solution within a factor of 1.5 of the optimal length. Table 2 shows how they performed on our test sets.

²We choose C_i = (1, 2, 4) instead of (2, 4, 1) or any other permutation.

4 Empirical Results

4.1 Architecture and Hyperparameters

No extensive architecture or hyperparameter search of the Ptr-Net was done in the work presented here, and we used virtually the same architecture throughout all the experiments and datasets. Even though there are likely some gains to be obtained by tuning the model, we felt that having the same model hyperparameters operate on all the problems makes the main message of the paper stronger. As a result, all our models used a single-layer LSTM with either 256 or 512 hidden units, trained with stochastic gradient descent with a learning rate of 1.0, batch size of 128, random uniform weight initialization from -0.08 to 0.08, and L2 gradient clipping of 2.0. We generated 1M training example pairs, and we did observe overfitting in some cases where the task was simpler (i.e., for small n). Training generally converged after 10 to 20 epochs.
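As a side note to the data-generation discussion in Section 3.3, the Held–Karp dynamic program translates into a short sketch; this is our illustrative O(2^n n^2) implementation, not the authors' code.

```python
from itertools import combinations

def held_karp(dist):
    """Exact TSP by Held-Karp dynamic programming, O(2^n n^2).

    `dist[i][j]` is the symmetric distance between cities i and j; the tour
    starts and ends at city 0, matching the "first city" convention above.
    """
    n = len(dist)
    # C[(S, j)]: (cost, predecessor) of the cheapest path 0 -> ... -> j
    # visiting exactly the cities in frozenset S (city 0 excluded from S).
    C = {(frozenset([j]), j): (dist[0][j], 0) for j in range(1, n)}
    for size in range(2, n):
        for subset in combinations(range(1, n), size):
            S = frozenset(subset)
            for j in subset:
                rest = S - {j}
                C[(S, j)] = min((C[(rest, k)][0] + dist[k][j], k) for k in rest)
    full = frozenset(range(1, n))
    cost, last = min((C[(full, j)][0] + dist[j][0], j) for j in range(1, n))
    # Walk predecessor pointers backwards to recover the tour.
    path, S, j = [], full, last
    while j != 0:
        path.append(j)
        S, j = S - {j}, C[(S, j)][1]
    return cost, [0] + path[::-1] + [0]

# Four cities on a unit square: the optimal tour walks the perimeter.
s = 2 ** 0.5
dist = [[0, 1, s, 1], [1, 0, 1, s], [s, 1, 0, 1], [1, s, 1, 0]]
cost, tour = held_karp(dist)
```

The 2^n state space is why exact data was only generated up to n = 20.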
4.2 Convex Hull

We used the convex hull as the guiding task, which allowed us to understand the deficiencies of standard models such as the sequence-to-sequence approach, and also to set our expectations of what a purely data-driven model would be able to achieve with respect to an exact solution. We reported two metrics: accuracy, and area covered of the true convex hull (note that any simple polygon will have full intersection with the true convex hull). To compute the accuracy, we considered two output sequences C1 and C2 to be the same if they represent the same polygon. For simplicity, we only computed the area coverage for the test examples in which the output represents a simple polygon (i.e., without self-intersections). If an algorithm fails to produce a simple polygon in more than 1% of the cases, we simply reported FAIL. The results are presented in Table 1. We note that the area coverage achieved with the Ptr-Net is close to 100%. Looking at examples of mistakes, we see that most problems come from points that are aligned (see Figure 3 (d) for a mistake for n = 500) – this is a common source of errors in most algorithms to solve the convex hull. We also observed that the order in which the inputs are presented to the encoder during inference affects performance. When the points on the true convex hull are seen "late" in the input sequence, the accuracy is lower. This is possibly because the network does not have enough processing steps to "update" the convex hull it computed until the latest points were seen. In order to overcome this problem, we used the attention mechanism described in Section 2.2, which allows the decoder to look at the whole input at any time. This modification boosted the model performance significantly. We inspected what attention was focusing on, and we observed that it was "pointing" at the correct answer on the input side. This inspired us to create the Ptr-Net model described in Section 2.3.
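The accuracy metric above treats two output sequences as equal when they represent the same polygon, i.e. equal up to cyclic rotation and traversal direction. A minimal sketch of such a check (our helper, assuming the special tokens have been stripped):

```python
def same_polygon(seq_a, seq_b):
    """True when two closed vertex sequences describe the same polygon, i.e.
    are equal up to cyclic rotation and traversal direction. Sequences are
    assumed closed (first vertex repeated at the end), with the special
    beginning/end tokens already removed."""
    a, b = list(seq_a[:-1]), list(seq_b[:-1])
    if len(a) != len(b):
        return False
    # Doubling b makes every cyclic rotation a contiguous window.
    fwd, rev = b + b, (b + b)[::-1]
    return any(a == fwd[i:i + len(a)] for i in range(len(b))) or \
           any(a == rev[i:i + len(a)] for i in range(len(b)))

# Same hull, different starting vertex and traversal direction.
ok_rot = same_polygon([2, 4, 3, 5, 2], [3, 5, 2, 4, 3])
ok_rev = same_polygon([2, 4, 3, 5, 2], [2, 5, 3, 4, 2])
bad = same_polygon([2, 4, 3, 5, 2], [2, 4, 5, 3, 2])
```
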
Besides outperforming both the LSTM and the LSTM with attention, our model has the key advantage of being inherently variable-length. The bottom half of Table 1 shows that, when training our model on a variety of lengths ranging from 5 to 50 (uniformly sampled, as we found other forms of curriculum learning not to be effective), a single model is able to perform quite well on all lengths it has been trained on (but some degradation for n = 50 can be observed w.r.t. the model trained only on length-50 instances). More impressive is the fact that the model does extrapolate to lengths that it has never seen during training. Even for n = 500, our results are satisfactory and indirectly indicate that the model has learned more than a simple lookup. Neither the LSTM nor the LSTM with attention can be used for any given n′ ≠ n without training a new model on n′.

4.3 Delaunay Triangulation

The Delaunay Triangulation test case is connected to our first problem of finding the convex hull. In fact, the Delaunay triangulation for a given set of points triangulates the convex hull of these points. We reported two metrics: accuracy and triangle coverage in percentage (the percentage of triangles the model predicted correctly). Note that, in this case, for an input point set P, the output sequence C(P) is, in fact, a set. As a consequence, any permutation of its elements will represent the same triangulation.

Table 1: Comparison between LSTM, LSTM with attention, and our Ptr-Net model on the convex hull problem. Note that the baselines must be trained on the same n that they are tested on. 5-50 means the dataset had a uniform distribution over lengths from 5 to 50.
METHOD          TRAINED n   n     ACCURACY   AREA
LSTM [1]        50          50    1.9%       FAIL
+ATTENTION [5]  50          50    38.9%      99.7%
PTR-NET         50          50    72.6%      99.9%
LSTM [1]        5           5     87.7%      99.6%
PTR-NET         5-50        5     92.0%      99.6%
LSTM [1]        10          10    29.9%      FAIL
PTR-NET         5-50        10    87.0%      99.8%
PTR-NET         5-50        50    69.6%      99.9%
PTR-NET         5-50        100   50.3%      99.9%
PTR-NET         5-50        200   22.1%      99.9%
PTR-NET         5-50        500   1.3%       99.2%

[Figure 3 panels: (a) LSTM, m = 50, n = 50; (b) ground truth, n = 50; (c) ground truth, n = 20, tour length 3.518; (d) Ptr-Net, m = 5-50, n = 500; (e) Ptr-Net, m = 50, n = 50; (f) Ptr-Net, m = 5-20, n = 20, predicted tour length 3.523.]

Figure 3: Examples of our model on convex hulls (left), Delaunay (center) and TSP (right), trained on m points and tested on n points. A failure of the LSTM sequence-to-sequence model for convex hulls is shown in (a). Note that the baselines cannot be applied to a length different from training.

Using the Ptr-Net model for n = 5, we obtained an accuracy of 80.7% and triangle coverage of 93.0%. For n = 10, the accuracy was 22.6% and the triangle coverage 81.3%. For n = 50, we did not produce any precisely correct triangulation, but obtained 52.8% triangle coverage. See the middle column of Figure 3 for an example for n = 50.

4.4 Travelling Salesman Problem

As the third problem, we considered the planar symmetric travelling salesman problem (TSP), which is NP-hard. Similarly to finding convex hulls, it also has sequential outputs. Given that the Ptr-Net implements an O(n^2) algorithm, it was unclear whether it would have enough capacity to learn a useful algorithm solely from data. As discussed in Section 3.3, it is feasible to generate exact solutions for relatively small values of n to be used as training data. For larger n, due to the importance of TSP, good and efficient algorithms providing reasonable approximate solutions exist. We used three different algorithms in our experiments – A1, A2, and A3 (see Section 3.3 for references).
Table 2: Tour lengths of the Ptr-Net and a collection of algorithms on small-scale TSP problems.

n                   OPTIMAL   A1     A2     A3     PTR-NET
5                   2.12      2.18   2.12   2.12   2.12
10                  2.87      3.07   2.87   2.87   2.88
50 (A1 TRAINED)     N/A       6.46   5.84   5.79   6.42
50 (A3 TRAINED)     N/A       6.46   5.84   5.79   6.09
5 (5-20 TRAINED)    2.12      2.18   2.12   2.12   2.12
10 (5-20 TRAINED)   2.87      3.07   2.87   2.87   2.87
20 (5-20 TRAINED)   3.83      4.24   3.86   3.85   3.88
25 (5-20 TRAINED)   N/A       4.71   4.27   4.24   4.30
30 (5-20 TRAINED)   N/A       5.11   4.63   4.60   4.72
40 (5-20 TRAINED)   N/A       5.82   5.27   5.23   5.91
50 (5-20 TRAINED)   N/A       6.46   5.84   5.79   7.66

Table 2 shows all of our results on TSP. The number reported is the length of the proposed tour. Unlike the convex hull and Delaunay triangulation cases, where the decoder was unconstrained, in this example we set the beam search procedure to only consider valid tours. Otherwise, the Ptr-Net model would sometimes output an invalid tour – for instance, it would repeat two cities or decide to ignore a destination. This procedure was relevant for n > 20: for n ≤ 20, the unconstrained decoding failed in less than 1% of the cases, and thus the constraint was not necessary. For n = 30, which goes beyond the longest sequence seen in training, the failure rate went up to 35%, and for n = 40, it went up to 98%. The first group of rows in the table shows the Ptr-Net trained on optimal data, except for n = 50, since that is not computationally feasible (we trained a separate model for each n). Interestingly, when using data from the worst algorithm (A1) to train the Ptr-Net, our model outperforms the algorithm it is trying to imitate. The second group of rows in the table shows how the Ptr-Net trained on optimal data with 5 to 20 cities can generalize beyond that. The results are virtually perfect for n = 25, and good for n = 30, but it seems to break for 40 and beyond (still, the results are far better than chance). This contrasts with the convex hull case, where we were able to generalize by a factor of 10.
However, the underlying algorithms have greater complexity than O(n log n), which could explain this.

5 Conclusions

In this paper we described Ptr-Net, a new architecture that allows us to learn a conditional probability of one sequence C^P given another sequence P, where C^P is a sequence of discrete tokens corresponding to positions in P. We show that Ptr-Nets can be used to learn solutions to three different combinatorial optimization problems. Our method works on variable-sized inputs (yielding variable-sized output dictionaries), something the baseline models (sequence-to-sequence with or without attention) cannot do directly. Even more impressively, they outperform the baselines on fixed-input-size problems, to which both models can be applied. Previous methods such as RNNSearch, Memory Networks and Neural Turing Machines [5, 6, 2] have used attention mechanisms to process inputs. However, these methods do not directly address problems that arise with variable output dictionaries. We have shown that an attention mechanism can be applied to the output to solve such problems. In so doing, we have opened up a new class of problems to which neural networks can be applied without artificial assumptions. In this paper, we have applied this extension to RNNSearch, but the methods are equally applicable to Memory Networks and Neural Turing Machines. Future work will try to show its applicability to other problems such as sorting, where the outputs are chosen from the inputs. We are also excited about the possibility of applying this approach to other combinatorial optimization problems.

Acknowledgments

We would like to thank Rafal Jozefowicz, Ilya Sutskever, Quoc Le and Samy Bengio for useful discussions. We would also like to thank Daniel Gillick for his help with the final manuscript.

References

[1] Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. Sequence to sequence learning with neural networks.
In Advances in Neural Information Processing Systems, pages 3104–3112, 2014.
[2] Alex Graves, Greg Wayne, and Ivo Danihelka. Neural Turing machines. arXiv preprint arXiv:1410.5401, 2014.
[3] David E. Rumelhart, Geoffrey E. Hinton, and Ronald J. Williams. Learning internal representations by error propagation. Technical report, DTIC Document, 1985.
[4] Anthony J. Robinson. An application of recurrent nets to phone probability estimation. IEEE Transactions on Neural Networks, 5(2):298–305, 1994.
[5] Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. In ICLR 2015, arXiv preprint arXiv:1409.0473, 2014.
[6] Jason Weston, Sumit Chopra, and Antoine Bordes. Memory networks. In ICLR 2015, arXiv preprint arXiv:1410.3916, 2014.
[7] Alex Graves. Generating sequences with recurrent neural networks. arXiv preprint arXiv:1308.0850, 2013.
[8] Oriol Vinyals, Lukasz Kaiser, Terry Koo, Slav Petrov, Ilya Sutskever, and Geoffrey Hinton. Grammar as a foreign language. arXiv preprint arXiv:1412.7449, 2014.
[9] Oriol Vinyals, Alexander Toshev, Samy Bengio, and Dumitru Erhan. Show and tell: A neural image caption generator. In CVPR 2015, arXiv preprint arXiv:1411.4555, 2014.
[10] Jeff Donahue, Lisa Anne Hendricks, Sergio Guadarrama, Marcus Rohrbach, Subhashini Venugopalan, Kate Saenko, and Trevor Darrell. Long-term recurrent convolutional networks for visual recognition and description. In CVPR 2015, arXiv preprint arXiv:1411.4389, 2014.
[11] Wojciech Zaremba and Ilya Sutskever. Learning to execute. arXiv preprint arXiv:1410.4615, 2014.
[12] Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735–1780, 1997.
[13] Nitish Srivastava, Elman Mansimov, and Ruslan Salakhutdinov. Unsupervised learning of video representations using LSTMs. In ICML 2015, arXiv preprint arXiv:1502.04681, 2015.
[14] Ray A. Jarvis. On the identification of the convex hull of a finite set of points in the plane.
Information Processing Letters, 2(1):18–21, 1973.
[15] Ronald L. Graham. An efficient algorithm for determining the convex hull of a finite planar set. Information Processing Letters, 1(4):132–133, 1972.
[16] Franco P. Preparata and Se June Hong. Convex hulls of finite sets of points in two and three dimensions. Communications of the ACM, 20(2):87–93, 1977.
[17] S. Rebay. Efficient unstructured mesh generation by means of Delaunay triangulation and Bowyer–Watson algorithm. Journal of Computational Physics, 106(1):125–138, 1993.
[18] Richard Bellman. Dynamic programming treatment of the travelling salesman problem. Journal of the ACM (JACM), 9(1):61–63, 1962.
[19] Suboptimal travelling salesman problem (TSP) solver. Available at https://github.com/dmishin/tsp-solver.
[20] Traveling salesman problem C++ implementation. Available at https://github.com/samlbest/traveling-salesman.
[21] C++ implementation of traveling salesman problem using Christofides and 2-opt. Available at https://github.com/beckysag/traveling-salesman.
Fast and Accurate Inference of Plackett–Luce Models

Lucas Maystre, EPFL, lucas.maystre@epfl.ch
Matthias Grossglauser, EPFL, matthias.grossglauser@epfl.ch

Abstract

We show that the maximum-likelihood (ML) estimate of models derived from Luce's choice axiom (e.g., the Plackett–Luce model) can be expressed as the stationary distribution of a Markov chain. This conveys insight into several recently proposed spectral inference algorithms. We take advantage of this perspective and formulate a new spectral algorithm that is significantly more accurate than previous ones for the Plackett–Luce model. With a simple adaptation, this algorithm can be used iteratively, producing a sequence of estimates that converges to the ML estimate. The ML version runs faster than competing approaches on a benchmark of five datasets. Our algorithms are easy to implement, making them relevant for practitioners at large.

1 Introduction

Aggregating pairwise comparisons and partial rankings are important problems with applications in econometrics [1], psychometrics [2, 3], sports ranking [4, 5] and multiclass classification [6]. One possible approach to tackle these problems is to postulate a statistical model of discrete choice. In this spirit, Luce [7] stated the axiom of choice in a foundational work published over fifty years ago. Denote by p(i | A) the probability of choosing item i when faced with alternatives in the set A. Given two items i and j, and any two sets of alternatives A and B containing i and j, the axiom posits that

p(i | A) / p(j | A) = p(i | B) / p(j | B). (1)

In other words, the odds of choosing item i over item j are independent of the rest of the alternatives. This simple assumption directly leads to a unique parametric choice model, known as the Bradley–Terry model in the case of pairwise comparisons, and the Plackett–Luce model in the generalized case of k-way rankings.
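As a quick numerical check of the axiom, assume each item has a positive strength and that p(i | A) is proportional to it, which is the parametric model the axiom leads to. The strengths below are hypothetical values chosen for illustration.

```python
def choice_prob(i, A, pi):
    """Luce choice probability: p(i | A) = pi_i / sum of pi_j over j in A."""
    return pi[i] / sum(pi[j] for j in A)

# Hypothetical strengths for four items; any positive values work, since
# normalization cancels in the odds ratio.
pi = {0: 2.0, 1: 1.0, 2: 4.0, 3: 0.5}
A, B = {0, 1, 2}, {0, 1, 3}
odds_A = choice_prob(0, A, pi) / choice_prob(1, A, pi)
odds_B = choice_prob(0, B, pi) / choice_prob(1, B, pi)
# Equation (1): the odds of 0 over 1 do not depend on the other alternatives.
```

Both odds equal the ratio of strengths, 2.0, regardless of which other items are present.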
In this paper, we highlight a connection between the maximum-likelihood (ML) estimate under these models and the stationary distribution of a Markov chain parameterized by the observed choices. Markov chains were already used in recent work [8, 9, 10] to aggregate pairwise comparisons and rankings. These approaches reduce the problem to that of finding a stationary distribution. By formalizing the link between the likelihood of observations under the choice model and a certain Markov chain, we unify these algorithms and explicate them from an ML inference perspective. We will also take a detour, and use this link in the reverse direction to give an alternative proof of a recent result on the error rate of the ML estimate [11], by using spectral analysis techniques. Beyond this, we make two contributions to statistical inference for this model. First, we develop a simple, consistent and computationally efficient spectral algorithm that is applicable to a wide range of models derived from the choice axiom. The exact formulation of the Markov chain used in the algorithm is distinct from related work [9, 10] and achieves a significantly better statistical efficiency at no additional computational cost. Second, we observe that with a small adjustment, the algorithm can be used iteratively, and it then converges to the ML estimate. An evaluation on five real-world datasets reveals that it runs consistently faster than competing approaches and has a much more predictable performance that does not depend on the structure of the data. The key step, finding a stationary distribution, can be offloaded to commonly available linear-algebra primitives, which makes our algorithms scale well. Our algorithms are intuitively pleasing, simple to understand and implement, and they outperform the state of the art; hence we believe that they will be highly useful to practitioners. The rest of the paper is organized as follows.
We begin by introducing some notations and presenting a few useful facts about the choice model and about Markov chains. By necessity, our exposition is succinct, and the reader is encouraged to consult Luce [7] and Levin et al. [12] for a more thorough exposition. In Section 2, we discuss related work. In Section 3, we present our algorithms, and in Section 4 we evaluate them on synthetic and real-world data. We conclude in Section 5.

Discrete choice model. Denote by n the number of items. Luce's axiom of choice implies that each item i ∈ {1, …, n} can be parameterized by a positive strength π_i ∈ R_{>0} such that p(i | A) = π_i / Σ_{j∈A} π_j for any A containing i. The strengths π = [π_i] are defined up to a multiplicative factor; for identifiability, we let Σ_i π_i = 1. An alternative parameterization of the model is given by θ_i = log(π_i), in which case the model is sometimes referred to as conditional logit [1].

Markov chain theory. We represent a finite, stationary, continuous-time Markov chain by a directed graph G = (V, E), where V is the set of states and E is the set of transitions with positive rate. If G is strongly connected, the Markov chain is said to be ergodic and admits a unique stationary distribution π. The global balance equations relate the transition rates λ_{ij} to the stationary distribution as follows:

Σ_{j≠i} π_i λ_{ij} = Σ_{j≠i} π_j λ_{ji}   ∀i. (2)

The stationary distribution is therefore invariant to changes in the time scale, i.e., to a rescaling of the transition rates. In the supplementary file, we briefly discuss how to find π given [λ_{ij}].

2 Related Work

Spectral methods applied to ranking and scoring items from noisy choices have a long-standing history. To the best of our knowledge, Saaty [13] was the first to suggest using the leading eigenvector of a matrix of inconsistent pairwise judgments to score alternatives. Two decades later, Page et al.
[14] developed PageRank, an algorithm that ranks Web pages according to the stationary distribution of a random walk on the hyperlink graph. In the same vein, Dwork et al. [8] proposed several variants of Markov chains for aggregating heterogeneous rankings. The idea is to construct a random walk that is biased towards high-ranked items, and to use the ranking induced by the stationary distribution. More recently, Negahban et al. [9] presented Rank Centrality, an algorithm for aggregating pairwise comparisons close in spirit to that of [8]. When the data is generated under the Bradley–Terry model, this algorithm asymptotically recovers model parameters with only ω(n log n) pairwise comparisons. For the more general case of rankings under the Plackett–Luce model, Azari Soufiani et al. [10] propose to break rankings into pairwise comparisons and to apply an algorithm similar to Rank Centrality. They show that the resulting estimator is statistically consistent. Interestingly, many of these spectral algorithms can be related to the method of moments, a broadly applicable alternative to maximum-likelihood estimation. The history of algorithms for maximum-likelihood inference under Luce's model goes back even further. In the special case of pairwise comparisons, the same iterative algorithm was independently discovered by Zermelo [15], Ford [16] and Dykstra [17]. Much later, this algorithm was explained by Hunter [18] as an instance of a minorization-maximization (MM) algorithm, and extended to the more general choice model. Today, Hunter's MM algorithm is the de facto standard for ML inference in Luce's model. As the likelihood is concave, off-the-shelf optimization procedures such as the Newton–Raphson method can also be used, although they have been reported to be slower and less practical [18]. Recently, Kumar et al. [19] looked at the problem of finding the transition matrix of a Markov chain, given its stationary distribution.
The problem of inferring Luce's model parameters from data can be reformulated in their framework, and the ML estimate is the solution to the inversion of the stationary distribution. Their work stands out as the first to link ML inference to Markov chains, albeit very differently from the way presented in our paper. Beyond algorithms, properties of the maximum-likelihood estimator in this model were studied extensively. Hajek et al. [11] consider the Plackett–Luce model for k-way rankings. They give an upper bound on the estimation error and show that the ML estimator is minimax-optimal. In summary, they show that only ω(n/k log n) samples are enough to drive the mean-square error down to zero, as n increases. Rajkumar and Agarwal [20] consider the Bradley–Terry model for pairwise comparisons. They show that the ML estimator is able to recover the correct ranking, even when the data is generated as per another model, e.g., Thurstone's [2], as long as a so-called low-noise condition is satisfied. We also mention that, as an alternative to likelihood maximization, Bayesian inference has also been proposed. Caron and Doucet [21] present a Gibbs sampler, and Guiver and Snelson [22] propose an approximate inference algorithm based on expectation propagation. In this work, we provide a unifying perspective on recent advances in spectral algorithms [9, 10] from a maximum-likelihood estimation perspective. It turns out that this perspective enables us to make contributions on both sides: on the one hand, we develop an improved and more general spectral ranking algorithm, and on the other hand, we propose a faster procedure for ML inference by using this algorithm iteratively.

3 Algorithms

We begin by expressing the ML estimate under the choice model as the stationary distribution of a Markov chain. We then take advantage of this formulation to propose novel algorithms for model inference.
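The key computational primitive in what follows was introduced in the preliminaries: finding the stationary distribution of a continuous-time chain from its transition rates, i.e. solving the global balance equations (2) together with the normalization constraint. A minimal dense-matrix sketch (the paper defers implementation details to its supplementary file; this is one standard way to do it):

```python
import numpy as np

def stationary_distribution(rates):
    """Solve the global balance equations (2) with the normalization
    sum(pi) = 1. `rates[i, j]` is the transition rate from state i to j;
    the chain is assumed ergodic (strongly connected rate graph)."""
    n = rates.shape[0]
    Q = rates - np.diag(rates.sum(axis=1))   # generator: rows sum to zero
    # pi Q = 0 is rank-deficient; replace one equation by sum(pi) = 1.
    A = np.vstack([Q.T[:-1], np.ones(n)])
    b = np.zeros(n)
    b[-1] = 1.0
    return np.linalg.solve(A, b)

# Two states, rate 2.0 from state 0 to state 1 and rate 1.0 back:
# the chain spends twice as much time in state 1.
pi = stationary_distribution(np.array([[0.0, 2.0], [1.0, 0.0]]))
```

Scaling all rates by a constant leaves `pi` unchanged, illustrating the time-scale invariance noted after equation (2).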
Although our derivation is made in the general choice model, we will also discuss implications for the special cases of pairwise data in Section 3.3 and k-way ranking data in Section 3.4. Suppose that we collect d independent observations in the multiset D = {(c_ℓ, A_ℓ) | ℓ = 1, …, d}. Each observation consists of a choice c_ℓ among a set of alternatives A_ℓ; we say that i wins over j and denote by i ≻ j whenever i, j ∈ A_ℓ and c_ℓ = i. We define the directed comparison graph as G_D = (V, E), with V = {1, …, n} and (j, i) ∈ E if i wins at least once over j in D. In order to ensure that the ML estimate is well-defined, we make the standard assumption that G_D is strongly connected [16, 18]. In practice, if this assumption does not hold, we can consider each strongly connected component separately.

3.1 ML estimate as a stationary distribution

For simplicity, we denote the model parameter associated with item c_ℓ by π_ℓ. The log-likelihood of parameters π given observations D is

log L(π | D) = Σ_{ℓ=1}^{d} [ log π_ℓ − log Σ_{j∈A_ℓ} π_j ]. (3)

For each item, we define two sets of indices. Let W_i := {ℓ | i ∈ A_ℓ, c_ℓ = i} and L_i := {ℓ | i ∈ A_ℓ, c_ℓ ≠ i} be the indices of the observations where item i wins over and loses against the alternatives, respectively. The log-likelihood function is strictly concave and the model admits a unique ML estimate π̂. The optimality condition ∇_{π̂} log L = 0 implies

∂ log L / ∂π̂_i = Σ_{ℓ∈W_i} ( 1/π̂_i − 1/Σ_{j∈A_ℓ} π̂_j ) − Σ_{ℓ∈L_i} 1/Σ_{j∈A_ℓ} π̂_j = 0   ∀i (4)

⟺ Σ_{j≠i} [ Σ_{ℓ∈W_i∩L_j} π̂_j / Σ_{t∈A_ℓ} π̂_t − Σ_{ℓ∈W_j∩L_i} π̂_i / Σ_{t∈A_ℓ} π̂_t ] = 0   ∀i. (5)

In order to go from (4) to (5), we multiply by π̂_i and rearrange the terms. To simplify the notation, let us further introduce the function

f(S, π) := Σ_{A∈S} 1 / Σ_{i∈A} π_i, (6)

which takes observations S ⊆ D and an instance of model parameters π, and returns a non-negative real number. Let D_{i≻j} := {(c_ℓ, A_ℓ) ∈ D | ℓ ∈ W_i ∩ L_j}, i.e., the set of observations where i wins over j. Then (5) can be rewritten as

Σ_{j≠i} π̂_i · f(D_{j≻i}, π̂) = Σ_{j≠i} π̂_j · f(D_{i≻j}, π̂)   ∀i.
This formulation conveys a new viewpoint on the ML estimate: it is easy to recognize in (7) the global balance equations (2) of a Markov chain on n states (representing the items), with transition rates λ_{ji} = f(D_{i≻j}, π̂) and stationary distribution π̂. These transition rates have an interesting interpretation: f(D_{i≻j}, π) is the count of how many times i wins over j, weighted by the strength of the alternatives. At this point, it is useful to observe that for any parameters π, f(D_{i≻j}, π) ≥ 1 if (j, i) ∈ E, and f(D_{i≻j}, π) = 0 otherwise. Combined with the assumption that GD is strongly connected, it follows that any π parameterizes the transition rates of an ergodic (homogeneous) Markov chain. The ergodicity of the inhomogeneous Markov chain, where the transition rates are constantly updated to reflect the current distribution over states, is established by the following theorem.

Theorem 1. The Markov chain with inhomogeneous transition rates λ_{ji} = f(D_{i≻j}, π) converges to the maximum-likelihood estimate π̂, for any initial distribution in the open probability simplex.

Proof (sketch). By (7), π̂ is the unique invariant distribution of the Markov chain. In the supplementary file, we consider an equivalent uniformized discrete-time chain; using the contraction mapping principle, one can show that this chain converges to the invariant distribution.

Algorithm 1: Luce Spectral Ranking
Require: observations D
 1: λ ← 0_{n×n}
 2: for (i, A) ∈ D do
 3:   for j ∈ A \ {i} do
 4:     λ_{ji} ← λ_{ji} + n/|A|
 5:   end for
 6: end for
 7: π̄ ← stationary distribution of the Markov chain λ
 8: return π̄

Algorithm 2: Iterative Luce Spectral Ranking
Require: observations D
 1: π ← [1/n, . . . , 1/n]^⊤
 2: repeat
 3:   λ ← 0_{n×n}
 4:   for (i, A) ∈ D do
 5:     for j ∈ A \ {i} do
 6:       λ_{ji} ← λ_{ji} + 1 / Σ_{t∈A} π_t
 7:     end for
 8:   end for
 9:   π ← stationary distribution of the Markov chain λ
10: until convergence

3.2 Approximate and exact ML inference

We approximate the Markov chain described in (7) by considering a priori that all alternatives have equal strength.
That is, we set the transition rates λ_{ji} := f(D_{i≻j}, π) by fixing π to [1/n, . . . , 1/n]^⊤. For i ≠ j, the contribution of each observation where i wins over j to the transition rate λ_{ji} is n/|A|. In other words, for each observation, the winning item is rewarded by a fixed amount of incoming rate that is evenly split across the alternatives (the chunk allocated to itself is discarded). We interpret the stationary distribution π̄ as an estimate of the model parameters. Algorithm 1 summarizes this procedure, called Luce Spectral Ranking (LSR). If we consider a growing number of observations, LSR converges to the true model parameters π∗, even in the restrictive case where the sets of alternatives are fixed.

Theorem 2. Let A = {Aℓ} be a collection of sets of alternatives such that for any partition of A into two non-empty sets S and T, (∪_{A∈S} A) ∩ (∪_{A∈T} A) ≠ ∅ (this is equivalent to stating that the hypergraph H = (V, A) is connected). Let dℓ be the number of choices observed over alternatives Aℓ. Then π̄ → π∗ as dℓ → ∞ for all ℓ.

Proof (sketch). The condition on A ensures that asymptotically GD is strongly connected. Let d → ∞ be a shorthand for dℓ → ∞ ∀ℓ. We can show that if items i and j are compared in at least one set of alternatives, the ratio of transition rates satisfies lim_{d→∞} λ_{ij}/λ_{ji} = π∗_j / π∗_i. It follows that in the limit of d → ∞, the stationary distribution is π∗. A rigorous proof is given in the supplementary file.

Starting from the LSR estimate, we can iteratively refine the transition rates of the Markov chain and obtain a sequence of estimates. By (7), the only fixed point of this iteration is the ML estimate π̂. We call this procedure I-LSR and describe it in Algorithm 2. LSR (or one iteration of I-LSR) entails (a) filling a matrix of (weighted) pairwise counts and (b) finding a stationary distribution. Let D := Σ_ℓ |Aℓ|, and let S be the running time of finding the stationary distribution. Then LSR has running time O(D + S). As a comparison, one iteration of
the MM algorithm [18] is O(D). Finding the stationary distribution can be implemented in different ways. For example, in a sparse regime where D ≪ n², the stationary distribution can be found with the power method in a few O(D) sparse matrix multiplications. In the supplementary file, we give more details about possible implementations. In practice, whether D or S turns out to be dominant in the running time is not a foregone conclusion.

3.3 Aggregating pairwise comparisons

A widely-used special case of Luce's choice model occurs when all sets of alternatives contain exactly two items, i.e., when the data consists of pairwise comparisons. This model was proposed by Zermelo [15], and later by Bradley and Terry [3]. As the stationary distribution is invariant to changes in the time scale, we can rescale the transition rates and set λ_{ji} := |D_{i≻j}| when using LSR on pairwise data. Let S be the set containing the pairs of items that have been compared at least once. In the case where each pair (i, j) ∈ S has been compared exactly p times, LSR is strictly equivalent to a continuous-time Markov-chain formulation of Rank Centrality [9]. In fact, our derivation justifies Rank Centrality as an approximate ML inference algorithm for the Bradley–Terry model. Furthermore, we provide a principled extension of Rank Centrality to the case where the number of comparisons observed is unbalanced: Rank Centrality considers transition rates proportional to the ratio of wins, whereas (7) justifies making transition rates proportional to the count of wins. Negahban et al. [9] also provide an upper bound on the error rate of Rank Centrality, which essentially shows that it is minimax-optimal. Because the two estimators are equivalent in the setting of balanced pairwise comparisons, the bound also applies to LSR. More interestingly, the expression of the ML estimate as a stationary distribution enables us to reuse the same analytical techniques to bound the error of the ML estimate.
In the supplementary file, we therefore provide an alternative proof of the recent result of Hajek et al. [11] on the minimax-optimality of the ML estimate.

3.4 Aggregating partial rankings

Another case of interest is when observations do not consist of a single choice, but of a ranking over the alternatives. We now suppose m observations consisting of k-way rankings, 2 ≤ k ≤ n. For conciseness, we suppose that k is the same for all observations. Let one such observation be σ(1) ≻ . . . ≻ σ(k), where σ(p) is the item with p-th rank. Luce [7] and later Plackett [4] independently proposed a model of rankings where

    P(\sigma(1) \succ \cdots \succ \sigma(k)) = \prod_{r=1}^{k} \frac{\pi_{\sigma(r)}}{\sum_{p=r}^{k} \pi_{\sigma(p)}}.    (8)

In this model, a ranking can be interpreted as a sequence of k − 1 independent choices: choose the first item, then choose the second among the remaining alternatives, etc. With this point of view in mind, LSR and I-LSR can easily accommodate data consisting of k-way rankings, by decomposing the m observations into d = m(k − 1) choices. Azari Soufiani et al. [10] provide a class of consistent estimators for the Plackett–Luce model, using the idea of breaking rankings into pairwise comparisons. Although they explain their algorithms from a generalized-method-of-moments perspective, it is straightforward to reinterpret their estimators as stationary distributions of particular Markov chains. In fact, for k = 2, their algorithm GMM-F is identical to LSR. When k > 2, however, breaking a ranking into its \binom{k}{2} pairwise comparisons implicitly makes the (incorrect) assumption that these comparisons are statistically independent. The Markov chain that LSR builds breaks rankings into pairwise rate contributions, but weights the contributions differently depending on the rank of the winning item. In Section 4, we show that this weighting turns out to be crucial. Our approach yields a significant improvement in statistical efficiency, yet keeps the same attractive computational cost and ease of use.
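As a concrete illustration (not the authors' released implementation), the ranking-to-choices decomposition together with Algorithms 1 and 2 fits in a short NumPy sketch. The (winner, alternatives) observation format and the dense least-squares stationary solve are our own choices here:

```python
import numpy as np

def ranking_to_choices(ranking):
    """Decompose sigma(1) > ... > sigma(k) into k - 1 independent choices:
    at rank r, item sigma(r) is chosen among {sigma(r), ..., sigma(k)}."""
    return [(ranking[r], tuple(ranking[r:])) for r in range(len(ranking) - 1)]

def stationary(rates):
    """Stationary distribution of a continuous-time Markov chain, given the
    off-diagonal rates (rates[j, i] = rate of the transition j -> i)."""
    n = rates.shape[0]
    generator = rates - np.diag(rates.sum(axis=1))
    # Solve pi @ generator = 0 together with the constraint sum(pi) = 1.
    system = np.vstack([generator.T, np.ones(n)])
    target = np.zeros(n + 1)
    target[-1] = 1.0
    pi, *_ = np.linalg.lstsq(system, target, rcond=None)
    return pi

def lsr(n, choices, strengths=None):
    """One spectral step: each observation (winner i, alternatives A) adds
    1 / (sum of strengths over A) to the rate lambda[j, i] for every j in A."""
    if strengths is None:
        # Uniform strengths: the contribution reduces to n / |A| (Algorithm 1).
        strengths = np.full(n, 1.0 / n)
    rates = np.zeros((n, n))
    for winner, alternatives in choices:
        contribution = 1.0 / sum(strengths[a] for a in alternatives)
        for loser in alternatives:
            if loser != winner:
                rates[loser, winner] += contribution
    return stationary(rates)

def ilsr(n, choices, max_iter=100, tol=1e-9):
    """Algorithm 2: iteratively refine the rates with the current estimate."""
    pi = np.full(n, 1.0 / n)
    for _ in range(max_iter):
        previous, pi = pi, lsr(n, choices, strengths=pi)
        if np.abs(pi - previous).sum() < tol:
            break
    return pi
```

For large sparse instances, the dense solve in `stationary` would be replaced by a power iteration on the uniformized chain, as discussed in Section 3.2.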
3.5 Applicability to other models

Several other variants and extensions of Luce's choice model have been proposed. For example, Rao and Kupper [23] extend the Bradley–Terry model to the case where a comparison between two items can result in a tie. In the supplementary file, we show that the ML estimate in the Rao–Kupper model can also be formulated as a stationary distribution, and we provide corresponding adaptations of LSR and I-LSR. We believe that our algorithms can be generalized to further models that are based on the choice axiom. However, this axiom is key: other choice models (such as Thurstone's [2]) do not admit the stationary-distribution interpretation we derive here.

4 Experimental evaluation

In this section, we compare LSR and I-LSR to other inference algorithms in terms of (a) statistical efficiency and (b) empirical performance. In order to understand the efficiency of the estimators, we generate synthetic data from a known ground truth. Then, we look at five real-world datasets and investigate the practical performance of the algorithms in terms of accuracy, running time and convergence rate.

Error metric. As the probability of i winning over j depends on the ratio of strengths π_i/π_j, the strengths are typically logarithmically spaced. In order to evaluate the accuracy of an estimate π with respect to ground-truth parameters π∗, we therefore use a log transformation, reminiscent of the random-utility-theoretic formulation of the choice model [1, 11]. Define θ := [log π_i − t], with t chosen such that Σ_i θ_i = 0. We consider the root-mean-squared error (RMSE)

    E_{RMS} = \| \theta - \theta^* \|_2 / \sqrt{n}.    (9)

4.1 Statistical efficiency

To assess the statistical efficiency of LSR and other algorithms, we follow the experimental procedure of Hajek et al. [11]. We consider n = 1024 items, and draw θ∗ uniformly at random in [−2, 2]^n. We generate d = 64 full rankings over the n items from a Plackett–Luce model parameterized with π ∝ [e^{θ_i}]. For a given k ∈ {2^1, . . . , 2^10},
we break down each of the full rankings as follows. First, we partition the items into n/k subsets of size k uniformly at random. Then, we store the k-way rankings induced by the full ranking on each of those subsets. As a result, we obtain m = dn/k statistically independent k-way partial rankings. For a given estimator, this data produces an estimate θ, for which we record the root-mean-square error with respect to θ∗. We consider four estimators. The first two (LSR and ML) work on the ranking data directly. The remaining two follow Azari Soufiani et al. [10], who suggest breaking down k-way rankings into \binom{k}{2} pairwise comparisons; these comparisons are then used by LSR, resulting in Azari Soufiani et al.'s GMM-F estimator, and by an ML estimator (ML-F). In short, the four estimators vary according to (a) whether they use rankings as-is or derived comparisons, and (b) whether the model is fitted using an approximate spectral algorithm or exact maximum likelihood.

Figure 1 plots E_RMS for increasing sizes of partial rankings, as well as a lower bound on the error of any estimator for the Plackett–Luce model (see Hajek et al. [11] for details). We observe that breaking the rankings into pairwise comparisons (*-F estimators) incurs a significant efficiency loss over using the k-way rankings directly (LSR and ML). We conclude that by correctly weighting pairwise rates in the Markov chain, LSR distinctly outperforms the rank-breaking approach as k increases. We also observe that the ML estimate is always more efficient. Spectral estimators such as LSR provide a quick, asymptotically consistent estimate of the parameters, but this observation justifies calling them approximate inference algorithms.

4.2 Empirical performance

We investigate the performance of various inference algorithms on five real-world datasets. The NASCAR [18] and sushi [24] datasets contain multiway partial rankings. The YouTube, GIFGIF and chess datasets contain pairwise comparisons.
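For reference, the error metric of eq. (9), used throughout this section, amounts to comparing mean-centered log-strength vectors; a minimal sketch (parameter names are ours):

```python
import numpy as np

def rms_error(pi_est, pi_true):
    """E_RMS of eq. (9): RMSE between mean-centered log-strength vectors."""
    theta = np.log(pi_est) - np.log(pi_est).mean()
    theta_star = np.log(pi_true) - np.log(pi_true).mean()
    return np.linalg.norm(theta - theta_star) / np.sqrt(len(pi_est))
```

The centering makes the metric invariant to the overall scale of the strengths, which is unidentifiable in the model.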
Among those, the chess dataset is particular in that it features 45% ties; in this case we use the extension of the Bradley–Terry model proposed by Rao and Kupper [23]. We preprocess each dataset by discarding items that are not part of the largest strongly connected component of the comparison graph. The number of items n, the number of rankings m, and the size k of the partial rankings in each dataset are given in Table 1. Additional details on the experimental setup are given in the supplementary material. (The YouTube, GIFGIF and chess datasets are available at https://archive.ics.uci.edu/ml/machine-learning-databases/00223/, http://www.gif.gf/ and https://www.kaggle.com/c/chess, respectively.)

[Figure 1: Statistical efficiency of different estimators for increasing sizes of partial rankings. As k grows, breaking rankings into pairwise comparisons becomes increasingly inefficient. LSR remains efficient at no additional computational cost.]

We first compare the estimates produced by three approximate ML inference algorithms: LSR, GMM-F and Rank Centrality (RC). Note that RC applies only to pairwise comparisons, and that LSR is the only algorithm able to infer the parameters of the Rao–Kupper model. Also note that in the case of pairwise comparisons, GMM-F and LSR are strictly equivalent. In Table 1, we report the root-mean-square deviation from the ML estimate θ̂ and the running time T of each algorithm.

Table 1: Performance of approximate ML inference algorithms.

Dataset    n       m          k     LSR ERMS   LSR T [s]   GMM-F ERMS   GMM-F T [s]   RC ERMS   RC T [s]
NASCAR     83      36         43    0.194      0.03        0.751        0.06          —         —
Sushi      100     5 000      10    0.034      0.22        0.130        0.19          —         —
YouTube    16 187  1 128 704  2     0.417      34.18       0.417        34.18         0.432     41.91
GIFGIF     5 503   95 281     2     1.286      1.90        1.286        1.90          1.295     2.84
Chess      6 174   63 421     2     0.420      2.90        —            —             —         —

The smallest value of ERMS for each dataset is attained by LSR (jointly with GMM-F on the pairwise datasets).
We observe that in the case of multiway partial rankings, LSR is almost four times more accurate than GMM-F on the datasets considered. In the case of pairwise comparisons, RC is slightly worse than LSR and GMM-F, because the number of comparisons per pair is not homogeneous (see Section 3.3). The running times of the three algorithms are comparable.

Next, we turn our attention to ML inference and consider three iterative algorithms: I-LSR, MM and Newton–Raphson. For Newton–Raphson, we use an off-the-shelf solver. Each algorithm is initialized with π^(0) = [1/n, . . . , 1/n]^⊤, and convergence is declared when ERMS < 0.01. In Table 2, we report the number of iterations I needed to reach convergence, as well as the total running time T of each algorithm.

Table 2: Performance of iterative ML inference algorithms.

Dataset    γD      I-LSR I   I-LSR T [s]   MM I    MM T [s]    Newton I   Newton T [s]
NASCAR     0.832   3         0.08          4       0.10        —          —
Sushi      0.890   2         0.42          4       1.09        3          10.45
YouTube    0.002   12        414.44        8 680   22 443.88   —          —
GIFGIF     0.408   10        22.31         119     109.62      5          72.38
Chess      0.007   15        43.69         181     55.61       3          49.37

I-LSR attains the smallest total running time T on every dataset. We observe that Newton–Raphson does not always converge, despite the log-likelihood being strictly concave. (On the NASCAR dataset, this has also been noted by Hunter [18]. Computing the Newton step appears to be severely ill-conditioned for many real-world datasets. We believe that it can be addressed by a careful choice of starting point, step size, or by monitoring the numerical stability; however, these modifications are non-trivial and impose an additional burden on the practitioner.) I-LSR consistently outperforms MM and Newton–Raphson in running time. Even if the average running time per iteration is in general larger than that of MM, it needs considerably fewer iterations: for the YouTube dataset, I-LSR yields an increase in speed of over 50 times. The slow convergence of minorization–maximization algorithms is known [18], yet the scale of the issue and its apparent unpredictability are surprising. In Hunter's MM algorithm, updating a given π_i involves only the parameters of the items to which i has been compared.
Therefore, we speculate that the convergence rate of MM depends on the expansion properties of the comparison graph GD. As an illustration, we consider the sushi dataset. To quantify the expansion properties, we look at the spectral gap γD of a simple random walk on GD; intuitively, the larger the spectral gap, the better the expansion properties [12]. The original comparison graph is almost complete, and γD = 0.890. By breaking each 10-way ranking into 5 independent pairwise comparisons, we effectively sparsify the comparison graph; as a result, the spectral gap decreases to 0.784. In Figure 2, we show the convergence rate of MM and I-LSR on the original (k = 10) and modified (k = 2) datasets. We observe that both algorithms display linear convergence; however, the rate at which MM converges appears to be sensitive to the structure of the comparison graph, whereas I-LSR is robust to changes in the structure. The spectral gap of each dataset is listed in Table 2.

[Figure 2: Convergence rate of I-LSR and MM on the sushi dataset. When partial rankings (k = 10) are broken down into independent comparisons (k = 2), the comparison graph becomes sparser. I-LSR is robust to this change, whereas the convergence rate of MM significantly decreases.]

5 Conclusion

In this paper, we develop a stationary-distribution perspective on the maximum-likelihood estimate of Luce's choice model. This perspective explains and unifies several recent spectral algorithms from an ML inference point of view. We present our own spectral algorithm that works on a wider range of data, and show that the resulting estimate significantly outperforms previous approaches in terms of accuracy. We also show that this simple algorithm, with a straightforward adaptation, can produce a sequence of estimates that converges to the ML estimate.
On real-world datasets, our ML algorithm is always faster than the state of the art, at times by up to two orders of magnitude. Beyond statistical and computational performance, we believe that a key strength of our algorithms is that they are simple to implement. As an example, our implementation of LSR fits in ten lines of Python code. The most complex operation, finding a stationary distribution, can be readily offloaded to commonly available and highly optimized linear-algebra primitives. As such, we believe that our work is very useful for practitioners.

Acknowledgments

We thank Holly Cogliati-Bauereis, Ksenia Konyushkova and Brunella Spinelli for careful proofreading and comments on the text.

References

[1] D. McFadden. Conditional logit analysis of qualitative choice behavior. In P. Zarembka, editor, Frontiers in Econometrics, pages 105–142. Academic Press, 1973.
[2] L. Thurstone. The method of paired comparisons for social values. Journal of Abnormal and Social Psychology, 21(4):384–400, 1927.
[3] R. A. Bradley and M. E. Terry. Rank Analysis of Incomplete Block Designs: I. The Method of Paired Comparisons. Biometrika, 39(3/4):324–345, 1952.
[4] R. L. Plackett. The Analysis of Permutations. Journal of the Royal Statistical Society, Series C (Applied Statistics), 24(2):193–202, 1975.
[5] A. Elo. The Rating of Chess Players, Past & Present. Arco, 1978.
[6] T. Hastie and R. Tibshirani. Classification by pairwise coupling. The Annals of Statistics, 26(2):451–471, 1998.
[7] R. D. Luce. Individual Choice Behavior: A Theoretical Analysis. Wiley, 1959.
[8] C. Dwork, R. Kumar, M. Naor, and D. Sivakumar. Rank Aggregation Methods for the Web. In Proceedings of the 10th International Conference on World Wide Web (WWW 2001), Hong Kong, China, 2001.
[9] S. Negahban, S. Oh, and D. Shah.
Iterative Ranking from Pair-wise Comparisons. In Advances in Neural Information Processing Systems 25 (NIPS 2012), Lake Tahoe, CA, 2012.
[10] H. Azari Soufiani, W. Z. Chen, D. C. Parkes, and L. Xia. Generalized Method-of-Moments for Rank Aggregation. In Advances in Neural Information Processing Systems 26 (NIPS 2013), Lake Tahoe, CA, 2013.
[11] B. Hajek, S. Oh, and J. Xu. Minimax-optimal Inference from Partial Rankings. In Advances in Neural Information Processing Systems 27 (NIPS 2014), Montreal, QC, Canada, 2014.
[12] D. A. Levin, Y. Peres, and E. L. Wilmer. Markov Chains and Mixing Times. American Mathematical Society, 2008.
[13] T. L. Saaty. The Analytic Hierarchy Process: Planning, Priority Setting, Resource Allocation. McGraw-Hill, 1980.
[14] L. Page, S. Brin, R. Motwani, and T. Winograd. The PageRank Citation Ranking: Bringing Order to the Web. Technical report, Stanford University, 1998.
[15] E. Zermelo. Die Berechnung der Turnier-Ergebnisse als ein Maximumproblem der Wahrscheinlichkeitsrechnung. Mathematische Zeitschrift, 29(1):436–460, 1928.
[16] L. R. Ford, Jr. Solution of a Ranking Problem from Binary Comparisons. The American Mathematical Monthly, 64(8):28–33, 1957.
[17] O. Dykstra, Jr. Rank Analysis of Incomplete Block Designs: A Method of Paired Comparisons Employing Unequal Repetitions on Pairs. Biometrics, 16(2):176–188, 1960.
[18] D. R. Hunter. MM algorithms for generalized Bradley–Terry models. The Annals of Statistics, 32(1):384–406, 2004.
[19] R. Kumar, A. Tomkins, S. Vassilvitskii, and E. Vee. Inverting a Steady-State. In Proceedings of the 8th International Conference on Web Search and Data Mining (WSDM 2015), pages 359–368, 2015.
[20] A. Rajkumar and S. Agarwal. A Statistical Convergence Perspective of Algorithms for Rank Aggregation from Pairwise Data. In Proceedings of the 31st International Conference on Machine Learning (ICML 2014), Beijing, China, 2014.
[21] F. Caron and A. Doucet.
Efficient Bayesian Inference for Generalized Bradley–Terry Models. Journal of Computational and Graphical Statistics, 21(1):174–196, 2012.
[22] J. Guiver and E. Snelson. Bayesian inference for Plackett–Luce ranking models. In Proceedings of the 26th International Conference on Machine Learning (ICML 2009), Montreal, Canada, 2009.
[23] P. V. Rao and L. L. Kupper. Ties in Paired-Comparison Experiments: A Generalization of the Bradley–Terry Model. Journal of the American Statistical Association, 62(317):194–204, 1967.
[24] T. Kamishima and S. Akaho. Efficient Clustering for Orders. In Mining Complex Data, pages 261–279. Springer, 2009.
Learning Bayesian Networks with Thousands of Variables

Mauro Scanagatta (IDSIA∗, SUPSI†, USI‡, Lugano, Switzerland) mauro@idsia.ch
Cassio P. de Campos (Queen's University Belfast, Northern Ireland, UK) c.decampos@qub.ac.uk
Giorgio Corani (IDSIA∗, SUPSI†, USI‡, Lugano, Switzerland) giorgio@idsia.ch
Marco Zaffalon (IDSIA∗, Lugano, Switzerland) zaffalon@idsia.ch

Abstract

We present a method for learning Bayesian networks from data sets containing thousands of variables without the need for structure constraints. Our approach is made of two parts. The first is a novel algorithm that effectively explores the space of possible parent sets of a node. It guides the exploration towards the most promising parent sets on the basis of an approximated score function that is computed in constant time. The second part is an improvement of an existing ordering-based algorithm for structure optimization. The new algorithm provably achieves a higher score than its original formulation. Our novel approach consistently outperforms the state of the art on very large data sets.

1 Introduction

Learning the structure of a Bayesian network from data is NP-hard [2]. We focus on score-based learning, namely finding the structure that maximizes a score which depends on the data [9]. Several exact algorithms have been developed, based on dynamic programming [12, 17], branch and bound [7], linear and integer programming [4, 10], and shortest-path heuristics [19, 20]. Usually structural learning is accomplished in two steps: parent set identification and structure optimization. Parent set identification produces a list of suitable candidate parent sets for each variable. Structure optimization assigns a parent set to each node, maximizing the score of the resulting structure without introducing cycles. The problem of parent set identification is unlikely to admit a polynomial-time algorithm with a good quality guarantee [11]. This motivates the development of effective search heuristics.
Usually, however, one decides the maximum in-degree (number of parents per node) k and then simply computes the score of all parent sets; at that point one performs structural optimization. An exception is the greedy search of the K2 algorithm [3], which has however been superseded by the more modern approaches mentioned above. A higher in-degree implies a larger search space and allows achieving a higher score, but it also requires higher computational time; when choosing the in-degree, the user makes a trade-off between these two objectives. When the number of variables is large, the in-degree is generally set to a small value, to keep the optimization feasible. The largest data set analyzed in [1] with the Gobnilp software contains 413 variables; it is analyzed setting k = 2. In [5] Gobnilp is used for structural learning with 1614 variables, setting k = 2. These are among the largest examples of score-based structural learning in the literature. In this paper we propose an algorithm that performs approximated structure learning with thousands of variables without constraints on the in-degree. It consists of a novel approach for parent set identification and a novel approach for structure optimization. For parent set identification, we propose an anytime algorithm that effectively explores the space of possible parent sets; it guides the exploration towards the most promising parent sets, exploiting an approximated score function that is computed in constant time. For structure optimization, we extend the ordering-based algorithm of [18], which provides an effective approach for model selection with reduced computational cost. Our algorithm is guaranteed to find a solution better than or equal to that of [18].

∗Istituto Dalle Molle di studi sull'Intelligenza Artificiale (IDSIA)
†Scuola universitaria professionale della Svizzera italiana (SUPSI)
‡Università della Svizzera italiana (USI)
We test our approach on data sets containing up to ten thousand variables. As a performance indicator we consider the score of the network found. Our parent set identification approach consistently outperforms the usual approach of setting the maximum in-degree and then computing the score of all parent sets. Our structure optimization approach outperforms Gobnilp when learning with more than 500 nodes. All the software and data sets used in the experiments are available online.

2 Structure Learning of Bayesian Networks

Consider the problem of learning the structure of a Bayesian network from a complete data set of N instances D = {D1, . . . , DN}. The set of n categorical random variables is X = {X1, . . . , Xn}. The goal is to find the best DAG G = (V, E), where V is the collection of nodes and E is the collection of arcs. E can be defined through the set of parents Π1, . . . , Πn of the variables. Different scores can be used to assess the fit of a DAG; we adopt the BIC, which asymptotically approximates the posterior probability of the DAG. The BIC score is decomposable, namely it is constituted by the sum of the scores of the individual variables:

    BIC(G) = \sum_{i=1}^{n} BIC(X_i, \Pi_i) = \sum_{i=1}^{n} \Bigl( \sum_{\pi \in |\Pi_i|} \sum_{x \in |X_i|} N_{x,\pi} \log \hat{\theta}_{x|\pi} - \frac{\log N}{2} (|X_i| - 1)\,|\Pi_i| \Bigr),

where \hat{\theta}_{x|\pi} is the maximum likelihood estimate of the conditional probability P(X_i = x | \Pi_i = \pi), N_{x,\pi} represents the number of times (X_i = x ∧ Π_i = π) appears in the data set, and |·| indicates the size of the Cartesian product space of the variables given as arguments (instead of the number of variables), so that |X_i| is the number of states of X_i and |∅| = 1. Exploiting decomposability, we first identify independently for each variable a list of candidate parent sets (parent set identification). Then, by structure optimization, we select for each node the parent set that yields the highest score without introducing cycles.
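To make the decomposable score concrete, the BIC of a single family (X_i, Π_i) can be computed from frequency counts alone. Below is a hedged pure-Python sketch; the record format (a list of dicts) and the function name are our own choices, not the paper's:

```python
import math
from collections import Counter

def bic_family(records, child, parents):
    """BIC(child, parents): family log-likelihood minus the penalty
    (log N / 2) * (number of child states - 1) * (parent configurations)."""
    n = len(records)
    # Joint counts N_{x,pi} and marginal parent-configuration counts N_pi.
    joint = Counter((tuple(r[p] for p in parents), r[child]) for r in records)
    marginal = Counter(tuple(r[p] for p in parents) for r in records)
    loglik = sum(c * math.log(c / marginal[pa]) for (pa, _), c in joint.items())
    child_states = len({r[child] for r in records})
    parent_configs = 1  # |Pi| = size of the Cartesian product of parent states
    for p in parents:
        parent_configs *= len({r[p] for r in records})
    penalty = 0.5 * math.log(n) * (child_states - 1) * parent_configs
    return loglik - penalty
```

Here |Π_i| is estimated as the Cartesian product of the observed cardinalities of the parents, matching the definition of |·| above.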
3 Parent set identification

For parent set identification one usually explores all the possible parent sets, whose number however grows as O(n^k), where k denotes the maximum in-degree. Pruning rules [7] do not considerably reduce the size of this space. Usually the parent sets are explored in sequential order: first all the parent sets of size one, then all the parent sets of size two, and so on, up to size k. We refer to this approach as sequential ordering. If the solver adopted for structural optimization is exact, this strategy allows finding the globally optimal graph for the chosen value of k. In order to deal with a large number of variables, however, it is necessary to set a low in-degree k. For instance, [1] adopts k = 2 when dealing with the largest data set (diabetes), which contains 413 variables. In [5] Gobnilp is used for structural learning with 1614 variables, again setting k = 2; a higher value of k would make the structural learning infeasible. (Gobnilp is available at http://www.cs.york.ac.uk/aig/sw/gobnilp/; our software and data sets are available at http://blip.idsia.ch.) Yet a low k implies dropping all the parent sets with size larger than k, some of which possibly have a high score. In [18] it is proposed to adopt the subset Πcorr of the variables most correlated with the child variable, and to consider only parent sets which are subsets of Πcorr. However, this approach is not commonly adopted, possibly because it requires specifying the size of Πcorr. Indeed, [18] acknowledges the need for further innovative approaches in order to effectively explore the space of parent sets. We propose two anytime algorithms to address this problem. The first, and simplest, we call greedy selection. It starts by exploring all the parent sets of size one and adding them to a list. Then it repeats the following until the time is expired: it pops the best-scoring parent set Π from the list, explores all the supersets obtained by adding one variable to Π, and adds them to the list.
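A minimal sketch of greedy selection follows; it assumes a caller-supplied scoring function `bic(child, parent_set)` (a hypothetical helper), and a simplified wall-clock budget:

```python
import heapq
import itertools
import time

def greedy_selection(candidates, child, bic, budget_seconds=1.0):
    """Anytime greedy exploration: repeatedly pop the best-scoring parent set
    found so far and score all of its one-variable supersets."""
    tick = itertools.count()          # tie-breaker so the heap never compares sets
    scored = {}                       # frozenset of parents -> BIC score
    heap = []                         # max-heap via negated scores
    for v in candidates:
        ps = frozenset([v])
        scored[ps] = bic(child, ps)
        heapq.heappush(heap, (-scored[ps], next(tick), ps))
    deadline = time.monotonic() + budget_seconds
    while heap and time.monotonic() < deadline:
        _, _, best = heapq.heappop(heap)
        for v in candidates:
            superset = best | {v}
            if v in best or superset in scored:
                continue
            scored[superset] = bic(child, superset)
            heapq.heappush(heap, (-scored[superset], next(tick), superset))
    return max(scored.items(), key=lambda item: item[1])
```

The returned pair is the best parent set seen before the budget expires; in the paper's two-step pipeline, the whole `scored` list would be handed over to the structure optimizer.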
Note that in general the parent sets chosen at two adjoining steps are not related to each other. The second approach (independence selection) adopts a more sophisticated strategy, as explained in the following.

3.1 Parent set identification by independence selection

Independence selection uses an approximation of the actual BIC score of a parent set Π, which we denote by BIC∗, to guide the exploration of the space of parent sets. The BIC∗ of a parent set constituted by the union of two non-empty parent sets Π1 and Π2 is defined as follows:

    BIC^*(X, \Pi_1, \Pi_2) = BIC(X, \Pi_1) + BIC(X, \Pi_2) + inter(X, \Pi_1, \Pi_2),    (1)

with Π1 ∪ Π2 = Π and

    inter(X, \Pi_1, \Pi_2) = \frac{\log N}{2} (|X| - 1)(|\Pi_1| + |\Pi_2| - |\Pi_1||\Pi_2| - 1) - BIC(X, \emptyset).

If we already know BIC(X, Π1) and BIC(X, Π2) from previous calculations (and we know BIC(X, ∅)), then BIC∗ can be computed in constant time (with respect to data accesses). We thus exploit BIC∗ to quickly estimate the score of a large number of candidate parent sets and to decide in which order to explore them. We provide a bound for the difference between BIC∗(X, Π1, Π2) and BIC(X, Π1 ∪ Π2). To this end, we denote by ii the Interaction Information [14]: ii(X; Y; Z) = I(X; Y | Z) − I(X; Y), namely the difference between the mutual information of X and Y conditional on Z and the unconditional mutual information of X and Y.

Theorem 1. Let X be a node of G and Π = Π1 ∪ Π2 be a parent set for X with Π1 ∩ Π2 = ∅ and Π1, Π2 non-empty. Then BIC(X, Π) = BIC∗(X, Π1, Π2) + N · ii(Π1; Π2; X), where ii is the Interaction Information estimated from data.

Proof.

    BIC(X, \Pi_1 \cup \Pi_2) - BIC^*(X, \Pi_1, \Pi_2)
    = BIC(X, \Pi_1 \cup \Pi_2) - BIC(X, \Pi_1) - BIC(X, \Pi_2) - inter(X, \Pi_1, \Pi_2)
    = \sum_{x,\pi_1,\pi_2} N_{x,\pi_1,\pi_2} \bigl[ \log \hat{\theta}_{x|\pi_1,\pi_2} - \log(\hat{\theta}_{x|\pi_1} \hat{\theta}_{x|\pi_2}) \bigr] + \sum_x N_x \log \hat{\theta}_x
    = \sum_{x,\pi_1,\pi_2} N_{x,\pi_1,\pi_2} \Bigl[ \log \hat{\theta}_{x|\pi_1,\pi_2} - \log \frac{\hat{\theta}_{x|\pi_1} \hat{\theta}_{x|\pi_2}}{\hat{\theta}_x} \Bigr]
    = \sum_{x,\pi_1,\pi_2} N_{x,\pi_1,\pi_2} \log \frac{\hat{\theta}_{x|\pi_1,\pi_2} \, \hat{\theta}_x}{\hat{\theta}_{x|\pi_1} \hat{\theta}_{x|\pi_2}}
    = \sum_{x,\pi_1,\pi_2} N \hat{\theta}_{x,\pi_1,\pi_2} \log \frac{\hat{\theta}_{\pi_1,\pi_2|x} \, \hat{\theta}_{\pi_1} \hat{\theta}_{\pi_2}}{\hat{\theta}_{\pi_1|x} \hat{\theta}_{\pi_2|x} \, \hat{\theta}_{\pi_1,\pi_2}}
    = N \Bigl[ \sum_{x,\pi_1,\pi_2} \hat{\theta}_{x,\pi_1,\pi_2} \log \frac{\hat{\theta}_{\pi_1,\pi_2|x}}{\hat{\theta}_{\pi_1|x} \hat{\theta}_{\pi_2|x}}
− X π1,π2 ˆθπ1,π2 log ˆθπ1,π2 ˆθπ1 ˆθπ2 !! = N · (I(Π1; Π2|X) −I(Π1; Π2)) = N · ii(Π1; Π2; X) , where I(·) denotes the (conditional) mutual information estimated from data. Corollary 1. Let X be a node of G, and Π = Π1 ∪Π2 be a parent set of X such that Π1 ∩Π2 = ∅ and Π1, Π2 non-empty. Then |BIC(X, Π) −BIC∗(X, Π1, Π2)| ≤N min{H(X), H(Π1), H(Π2)} . Proof. Theorem 1 states that BIC(X, Π) = BIC∗(X, Π1, Π2) + N · ii(Π1; Π2; X). We now devise bounds for interaction information, recalling that mutual information and conditional mutual information are always non-negative and achieve their maximum value at the smallest entropy H of their 3 argument: −H(Π2) ≤−I(Π1; Π2) ≤ii(Π1; Π2; X) ≤I(Π1; Π2|X) ≤H(Π2). The theorem is proven by simply permuting the values Π1; Π2; X in the ii of such equation. Since ii(Π1; Π2; X) = I(Π1; Π2|X)−I(Π1; Π2) = I(X; Π1|Π2)−I(X; Π1) = I(Π2; X|Π1)−I(Π2; X) , the bounds for ii are valid. We know that 0 ≤H(Π) ≤log(|Π|) for any set of nodes Π, hence the result of Corollary 1 could be further manipulated to achieve a bound for the difference between BIC and BIC∗of at most N log(min{|X|, |Π1|, |Π2|}). However, Corollary 1 is stronger and can still be computed efficiently as follows. When computing BIC∗(X, Π1, Π2), we assumed that BIC(X, Π1) and BIC(X, Π2) had been precomputed. As such, we can also have precomputed the values H(Π1) and H(Π2) at the same time as the BIC scores were computed, without any significant increase of complexity (when computing BIC(X, Π) for a given Π, just use the same loop over the data to compute H(Π)). Corollary 2. Let X be a node of G, and Π = Π1∪Π2 be a parent set for that node with Π1∩Π2 = ∅ and Π1, Π2 non-empty. If Π1 ⊥⊥Π2, then BIC(X, Π1 ∪Π2) ≥BIC∗(X, Π1 ∪Π2). If Π1 ⊥⊥Π2 |X, then BIC(X, Π1 ∪Π2) ≤BIC∗(X, Π1 ∪Π2). If the interaction information ii(Π1; Π2; X) = 0, then BIC(X, Π1 ∪Π2) = BIC∗(X, Π1, Π2). Proof. 
It follows from Theorem 1, considering that the mutual information I(Π1; Π2) = 0 if Π1 and Π2 are independent, while I(Π1; Π2 | X) = 0 if Π1 and Π2 are conditionally independent given X.

We now devise a novel pruning strategy for BIC based on the bounds of Corollaries 1 and 2.

Theorem 2. Let X be a node of G, and Π = Π1 ∪ Π2 be a parent set for that node with Π1 ∩ Π2 = ∅ and Π1, Π2 non-empty. Let Π′ ⊃ Π. If
$$\mathrm{BIC}^*(X, \Pi_1, \Pi_2) + \frac{\log N}{2}\,(|X|-1)\,|\Pi'| > N \min\{H(X), H(\Pi_1), H(\Pi_2)\},$$
then Π′ and its supersets are not optimal and can be ignored.

Proof. BIC*(X, Π1, Π2) − N min{H(X), H(Π1), H(Π2)} + (log N / 2)(|X| − 1)|Π′| > 0 implies, by Corollary 1, BIC(X, Π) + (log N / 2)(|X| − 1)|Π′| > 0, and Theorem 4 of [6] then prunes Π′ and all its supersets.

Thus we can efficiently check whether large parts of the search space can be discarded based on these results. We note that Corollary 1, and hence Theorem 2, is very generic in the choice of Π1 and Π2, even though usually one of them is taken as a singleton.

3.2 Independence selection algorithm

We now describe the algorithm that exploits the BIC* score in order to effectively explore the space of the parent sets. It uses two lists: (1) open: a list of the parent sets to be explored, ordered by their BIC* score; (2) closed: a list of already explored parent sets, along with their actual BIC score. The algorithm starts with the BIC score of the empty set computed. First it explores all the parent sets of size one and saves their BIC scores in the closed list. Then it adds to the open list every parent set of size two, computing their BIC* scores in constant time on the basis of the scores available from the closed list. It then proceeds as follows until all elements in open have been processed, or the time is expired. It extracts from open the parent set Π with the best BIC* score; it computes its BIC score and adds it to the closed list. It then looks for all the possible expansions of Π obtained by adding a single variable Y, such that Π ∪ {Y} is not present in open or closed.
It adds them to open with their BIC*(X, Π, Y) scores. Eventually it also considers all the explored subsets of Π: it safely [7] prunes Π if any of its subsets yields a higher BIC score than Π. The algorithm returns the content of the closed list, pruned and ordered by BIC score. This list becomes the content of the so-called cache of scores for X. The procedure is repeated for every variable and can be easily parallelized.

Figure 1 compares sequential ordering and independence selection. It shows that independence selection is more effective than sequential ordering because it biases the search towards the highest-scoring parent sets.

[Figure 1: Exploration of the parent sets space for a given variable performed by (a) sequential ordering and (b) independence selection ordering (BIC score vs. iteration). Each point refers to a distinct parent set.]

4 Structure optimization

The goal of structure optimization is to choose the overall highest-scoring parent sets (measured by the sum of the local scores) without introducing directed cycles in the graph. We start from the approach proposed in [18] (which we call ordering-based search, or OBS), which exploits the fact that the optimal network can be found in time O(Ck), where C = Σ_{i=1}^n c_i and c_i is the number of elements in the cache of scores of X_i, if an ordering over the variables is given. Θ(k) time is needed to check whether all the variables in a parent set for X come before X in the ordering (a simple array can be used as the data structure for this check). This implies working in the search space of the possible orderings, which is convenient as it is smaller than the space of network structures. Multiple orderings are sampled and evaluated (different techniques can be used for guiding the sampling). For each sampled total ordering ≺ over variables X1, . . .
, Xn, the network is consistent with the order if X ≺ Xi for every Xi and every X ∈ Πi. A network consistent with a given ordering automatically satisfies the acyclicity constraint. This allows us to choose the best parent set of each node independently. Moreover, for a given total ordering V1, . . . , Vn of the variables, the algorithm tries to improve the network by a greedy swapping procedure: if there is a pair Vj, Vj+1 such that the swapped ordering with Vj in place of Vj+1 (and vice versa) yields a better score for the network, then these nodes are swapped and the search continues. One advantage of this swapping over sampling extra random orderings is that searching for a swap and updating the network (if a good swap is found) only takes time O((cj + cj+1) · kn) (which can be sped up, as cj is only inspected for parent sets containing Vj+1, and cj+1 is only processed if Vj+1 has Vj as a parent in the current network), while a new sampled ordering would take O(n + Ck) (the swapping approach is usually favourable if ci is Ω(n), which is a plausible assumption). We emphasize that k is used here solely for the purpose of analyzing the complexity of the methods, since our parent set identification approach does not rely on a fixed value of k.

However, the consistency rule of OBS is quite restrictive. While it certainly rules out all cyclic structures, it also rules out some acyclic ones which could be captured by interpreting the ordering in a slightly different manner. We propose a novel consistency rule for a given ordering which processes the nodes V1, . . . , Vn from Vn to V1 (OBS can do it in any order, as the local parent sets can be chosen independently) and defines the parent set of Vj such that it does not introduce a cycle in the current partial network. This allows back-arcs in the ordering from a node Vj to its successors, as long as this does not introduce a cycle. We call this idea acyclic selection OBS (or simply ASOBS).
Because we need to check for cycles at each step of constructing the network for a given ordering, at first glance the algorithm seems slower (time complexity of O(Cn) against O(Ck) for OBS; this difference is only relevant because we intend to work with large values of n). Surprisingly, we can implement it with the same overall time complexity of O(Ck), as follows.

1. Build and keep a Boolean square matrix m to mark the descendants of nodes (m(X, Y) tells whether Y is a descendant of X). Initialize all entries to false.
2. For each node Vj in the order, with j = n, . . . , 1:
   (a) Go through the parent sets and pick the best-scoring one in which no contained parent is a descendant of Vj (this takes time O(cik) if parent sets are kept as lists).
   (b) Build a todo list with the descendants of Vj from the matrix representation, and associate an empty todo list to all ancestors of Vj.
   (c) Initialize the todo lists of the parents of Vj with the descendants of Vj.
   (d) For each ancestor X of Vj (ancestors are iteratively visited by a depth-first graph search over the network built so far; a node is processed only after its children with non-empty todo lists have been processed; the search stops when all ancestors are visited):
       i. For each element Y in the todo list of X: if m(X, Y) is true, ignore Y and move on; otherwise set m(X, Y) to true and add Y to the todo lists of the parents of X.

(O(·), Ω(·) and Θ(·) are to be understood as the usual asymptotic notation.)

Let us analyze the complexity of the method. Step 2a takes overall time O(Ck) (already accounting for the outer loop). Step 2b takes overall time O(n^2) (already accounting for the outer loop). Steps 2c and 2(d)i will be analyzed based on the number of elements in the todo lists and the time to process them, in an amortized way.
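The steps above can be sketched as follows. This is an illustrative Python sketch, not the paper's implementation: it keeps explicit descendant sets rather than the Boolean matrix with amortized todo-list bookkeeping (so it does not attain the O(Ck + n^2) bound), and it assumes each `caches[v]` is a list of (score, parent_set) pairs sorted by decreasing score.

```python
def asobs(order, caches):
    """Acyclic-selection OBS sketch.

    `caches[v]`: (score, parent_set) pairs, best score first (an
    assumed input format). Nodes are processed from last to first;
    a parent set is admissible if none of its members is currently
    a descendant of the node, which allows back-arcs w.r.t. the
    ordering as long as no cycle is created.
    """
    desc = {v: set() for v in order}   # desc[v]: current descendants of v
    parents_of = {}

    def propagate(u, new_desc):
        # Push new descendants up through the ancestors of u,
        # stopping where they are already recorded.
        stack = [u]
        while stack:
            w = stack.pop()
            fresh = new_desc - desc[w]
            if not fresh:
                continue
            desc[w] |= fresh
            stack.extend(parents_of.get(w, ()))

    for v in reversed(order):
        for score, pset in caches[v]:
            if not (pset & desc[v]) and v not in pset:
                parents_of[v] = frozenset(pset)
                break
        else:
            parents_of[v] = frozenset()  # fallback: empty parent set
        new_desc = {v} | desc[v]
        for p in parents_of[v]:
            propagate(p, new_desc)
    return parents_of
```

A back-arc from a later node to an earlier one in the ordering is accepted whenever the chosen parents are not yet descendants, which is exactly what the OBS consistency rule forbids.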
Note that the time complexity is directly related to the number of elements that are processed from the todo lists (we can simply look at the moment they leave a list, as their inclusions in the lists are equal in number). We now count the number of times we process an element from a todo list. This number is overall bounded (over all outer loop cycles) by the number of times a cell of the matrix m can turn from false to true (which is O(n^2)), plus the number of times an element is ignored because the matrix cell was already set to true (which is at most O(n) for each Vj, as this is the maximum number of descendants of Vj and each of them can fall into this category only once, so again O(n^2) times in total). In other words, each element removed from a todo list is either ignored (matrix cell already true) or an entry of the descendant matrix is changed from false to true, and this can only happen O(n^2) times. Hence the total time complexity is O(Ck + n^2), which is O(Ck) for any C greater than n^2/k (a very plausible scenario, as the local cache of a variable usually has more than n/k elements). Moreover, this new method has the following interesting properties.

Theorem 3. For a given ordering ≺, the network obtained by ASOBS has score equal to or greater than that obtained by OBS.

Proof. It follows immediately from the fact that the consistency rule of ASOBS generalizes that of OBS: for each node Vj with j = n, . . . , 1, ASOBS allows all parent sets allowed by OBS, and also others (containing back-arcs).

Theorem 4. For a given ordering ≺ defined by V1, . . . , Vn and a current graph G consistent with ≺, if the OBS consistency rule allows the swapping of Vj, Vj+1 and this leads to improving the score of G, then the consistency rule of ASOBS allows the same swapping and achieves the same improvement in score.

Proof.
It follows immediately from the fact that the consistency rule of ASOBS generalizes that of OBS: from a given graph G, if a swapping is possible under the OBS rules, then it is also possible under the ASOBS rules.

5 Experiments

We compare three different approaches for parent set identification (sequential, greedy selection and independence selection) and three different approaches for structure optimization (Gobnilp, OBS and ASOBS). This yields nine different approaches for structural learning, obtained by combining the methods for parent set identification and structure optimization. Note that OBS has been shown in [18] to outperform other greedy and tabu searches over structures, such as greedy hill-climbing and the optimal-reinsertion-search method [15]. We allow each approach one minute per variable for parent set identification. We set the maximum in-degree to k = 6, a high value that allows learning even complex structures. Notice that our novel approach does not need a maximum in-degree; we set one to put our approach and its competitors on the same ground. Once the scores of the parent sets are computed, we run each solver (Gobnilp, OBS, ASOBS) for 24 hours. For a given data set the computation is performed on the same machine. The explicit goal of each approach, for both parent set identification and structure optimization, is to maximize the BIC score. We then measure the BIC score of the Bayesian networks eventually obtained as the performance indicator. The difference in BIC score between two alternative networks is an asymptotic approximation of the logarithm of the Bayes factor, i.e., of the ratio of the posterior probabilities of the two competing models. Let us denote by ∆BIC1,2 = BIC1 − BIC2 the difference between the BIC scores of network 1 and network 2.
Positive values of ∆BIC1,2 imply evidence in favor of network 1. The evidence in favor of network 1 is respectively {weak, positive, strong, very strong} if ∆BIC1,2 is between {0 and 2; 2 and 6; 6 and 10; beyond 10} [16].

Table 1: Data sets sorted according to the number n of variables.

  Data set    n   | Data set     n   | Data set    n   | Data set     n
  Audio       100 | Retail       135 | MSWeb       294 | Reuters-52   889
  Jester      100 | Pumsb-star   163 | Book        500 | C20NG        910
  Netflix     100 | DNA          180 | EachMovie   500 | BBC          1058
  Accidents   111 | Kosarek      190 | WebKB       839 | Ad           1556

5.1 Learning from datasets

We consider 16 data sets already used in the structure learning literature, first introduced in [13] and [8]. We randomly split each data set into three subsets of instances; this yields 48 data sets. The approaches for parent set identification are compared in Table 2. For each fixed structure optimization approach, we learn the network starting from the list of parent sets computed by independence selection (IS), greedy selection (GS) and sequential selection (SQ). In turn we analyze ∆BIC(IS,GS) and ∆BIC(IS,SQ). A positive ∆BIC means that independence selection yields a network with higher BIC score than the network obtained using the alternative approach for parent set identification; vice versa for negative values. In most cases (see Table 2) ∆BIC > 10, implying very strong support for the network learned using independence selection. We further analyze the results through a sign test. The null hypothesis of the test is that the BIC score of the network learned under independence selection is smaller than or equivalent to the BIC score of the network learned using the alternative approach (greedy selection or sequential selection, depending on the case). If a data set yields a ∆BIC which is {very negative, strongly negative, negative, neutral}, it supports the null hypothesis. If a data set yields a ∆BIC which is {positive, strongly positive, very positive}, it supports the alternative hypothesis.
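The evidence scale above can be expressed as a small helper. The exact boundary handling (closed vs. open intervals at 2, 6 and 10) is an assumption on my part, since the text only gives the ranges.

```python
def bic_evidence(delta_bic):
    """Map ΔBIC(1,2) = BIC1 - BIC2 to the qualitative evidence
    scale of Raftery [16] quoted in the text. Negative values
    mirror the scale in favor of network 2."""
    magnitude = abs(delta_bic)
    if magnitude <= 2:
        strength = "weak"
    elif magnitude <= 6:
        strength = "positive"
    elif magnitude <= 10:
        strength = "strong"
    else:
        strength = "very strong"
    favored = 1 if delta_bic >= 0 else 2
    return strength, favored
```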
Under any fixed structure solver, the sign test rejects the null hypothesis, providing significant evidence in favor of independence selection. When we cite the sign test in the following, we refer to the same type of analysis: the sign test analyzes the counts of the ∆BIC values in favor of and against a given method.

As for structure optimization, ASOBS achieves a higher BIC score than OBS on all 48 data sets, under every chosen approach for parent set identification. These results confirm the improvement of ASOBS over OBS, theoretically proven in Section 4. In most cases the ∆BIC in favor of ASOBS is larger than 10. The difference in favor of ASOBS is significant (sign test, p < 0.01) under every chosen approach for parent set identification. We now compare ASOBS and Gobnilp. On the smaller data sets (27 data sets with n < 500), Gobnilp significantly outperforms (sign test, p < 0.01) ASOBS under every chosen approach for parent set identification. On most of these data sets, the ∆BIC in favor of the network learned by Gobnilp is larger than 10. This outcome is expected, as Gobnilp is an exact solver and those data sets imply a relatively reduced search space. However, the focus of this paper is on large data sets. On the 21 data sets with n ≥ 500, ASOBS outperforms Gobnilp (sign test, p < 0.01) under every chosen approach for parent set identification (Table 3).

Table 2: Comparison of the approaches for parent set identification on 48 data sets. Given any fixed solver for structural optimization, IS results in significantly higher BIC scores than both GS and SQ. K denotes ∆BIC.

  structure solver:                 Gobnilp       ASOBS         OBS
  parent identification: IS vs      GS    SQ     GS    SQ     GS    SQ
  Very positive (K > 10)            44    38     44    30     44    32
  Strongly positive (6 < K < 10)     0     0      0     4      1     0
  Positive (2 < K < 6)               0     4      2     3      0     2
  Neutral (-2 < K < 2)               2     3      0     4      2     4
  Negative (-6 < K < -2)             0     1      2     1      0     2
  Strongly negative (-10 < K < -6)   1     1      0     5      0     4
  Very negative (K < -10)            1     1      0     1      1     4
  p-value                         <0.01 <0.01  <0.01 <0.01  <0.01 <0.01

Table 3: Comparison between the structure optimization approaches on the 21 data sets with n ≥ 500. ASOBS (AS) outperforms both Gobnilp (GB) and OBS (OB) under any chosen approach for parent set identification. K denotes ∆BIC.

  parent identification:        Independence sel.   Greedy sel.   Sequential sel.
  structure solver: AS vs          GB    OB           GB    OB       GB    OB
  Very positive (K > 10)           21    21           20    21       19    21
  Strongly positive (6 < K < 10)    0     0            0     0        0     0
  Positive (2 < K < 6)              0     0            0     0        0     0
  Neutral (-2 < K < 2)              0     0            0     0        0     0
  Negative (-6 < K < -2)            0     0            0     0        0     0
  Strongly negative (-10 < K < -6)  0     0            0     0        0     0
  Very negative (K < -10)           0     0            1     0        2     0
  p-value                        <0.01 <0.01        <0.01 <0.01    <0.01 <0.01

5.2 Learning from data sets sampled from known networks

In the next experiments we create data sets by sampling from known networks. We take the largest networks available in the literature (footnote 4): andes (n=223), diabetes (n=413), pigs (n=441), link (n=724), munin (n=1041). Additionally we randomly generate 15 further networks: five of size 2000, five of size 4000, and five of size 10000. Each variable has a number of states randomly drawn from 2 to 4, and a number of parents randomly drawn from 0 to 6. Overall we consider 20 networks. From each network we sample a data set of 5000 instances. We perform the experiments and analysis as in the previous section; for the sake of brevity we do not add further tables of results. As for parent set identification, independence selection outperforms both greedy selection and sequential selection. The difference in favor of independence selection is significant (sign test, p-value < 0.01) under every chosen structure optimization approach. The ∆BIC of the learned network is > 10 in most cases. Take for instance Gobnilp for structure optimization.
Then independence selection yields ∆BIC > 10 in 18/20 cases when compared to GS, and ∆BIC > 10 in 19/20 cases when compared to SQ. Similar results are obtained using the other solvers for structure optimization. Strong results also support ASOBS against OBS and Gobnilp. Under every approach for parent set identification, ∆BIC > 10 is obtained in 20/20 cases when comparing ASOBS and OBS. The number of cases in which ASOBS obtains ∆BIC > 10 when compared against Gobnilp ranges between 17/20 and 19/20, depending on the approach adopted for parent set selection. The superiority of ASOBS over both OBS and Gobnilp is significant (sign test, p < 0.01) under every approach for parent set identification. Moreover, we measured the Hamming distance between the moralized true structure and the learned structure. On the 21 data sets with n ≥ 500, ASOBS outperforms Gobnilp and OBS, and IS outperforms GS and SQ (sign test, p < 0.01). The novel framework is thus superior in terms of both score and correctness of the retrieved structure.

6 Conclusion and future work

Our novel approximate approach for structural learning of Bayesian networks scales up to thousands of nodes without constraints on the maximum in-degree. The current results refer to the BIC score, but in the future the methodology could be extended to other scoring functions.

Acknowledgments

Work partially supported by the Swiss NSF grant n. 200021 146606 / 1.

Footnote 4: http://www.bnlearn.com/bnrepository/

References

[1] M. Bartlett and J. Cussens. Integer linear programming for the Bayesian network structure learning problem. Artificial Intelligence, 2015. In press.
[2] D. M. Chickering, C. Meek, and D. Heckerman. Large-sample learning of Bayesian networks is hard. In Proceedings of the 19th Conference on Uncertainty in Artificial Intelligence, UAI-03, pages 124–133. Morgan Kaufmann, 2003.
[3] G. F. Cooper and E. Herskovits. A Bayesian method for the induction of probabilistic networks from data. Machine Learning, 9(4):309–347, 1992.
[4] J. Cussens. Bayesian network learning with cutting planes. In Proceedings of the 27th Conference on Uncertainty in Artificial Intelligence, UAI-11, pages 153–160. AUAI Press, 2011.
[5] J. Cussens, B. Malone, and C. Yuan. IJCAI 2013 tutorial on optimal algorithms for learning Bayesian networks (https://sites.google.com/site/ijcai2013bns/slides), 2013.
[6] C. P. de Campos and Q. Ji. Efficient structure learning of Bayesian networks using constraints. Journal of Machine Learning Research, 12:663–689, 2011.
[7] C. P. de Campos, Z. Zeng, and Q. Ji. Structure learning of Bayesian networks using constraints. In Proceedings of the 26th Annual International Conference on Machine Learning, ICML-09, pages 113–120, 2009.
[8] J. V. Haaren and J. Davis. Markov network structure learning: A randomized feature generation approach. In Proceedings of the 26th AAAI Conference on Artificial Intelligence, 2012.
[9] D. Heckerman, D. Geiger, and D. M. Chickering. Learning Bayesian networks: The combination of knowledge and statistical data. Machine Learning, 20:197–243, 1995.
[10] T. Jaakkola, D. Sontag, A. Globerson, and M. Meila. Learning Bayesian network structure using LP relaxations. In Proceedings of the 13th International Conference on Artificial Intelligence and Statistics, AISTATS-10, pages 358–365, 2010.
[11] M. Koivisto. Parent assignment is hard for the MDL, AIC, and NML costs. In Proceedings of the 19th Annual Conference on Learning Theory, pages 289–303. Springer-Verlag, 2006.
[12] M. Koivisto and K. Sood. Exact Bayesian structure discovery in Bayesian networks. Journal of Machine Learning Research, 5:549–573, 2004.
[13] D. Lowd and J. Davis. Learning Markov network structure with decision trees. In G. I. Webb, B. Liu, C. Zhang, D. Gunopulos, and X. Wu, editors, Proceedings of the 10th International Conference on Data Mining (ICDM 2010), pages 334–343, 2010.
[14] W. J. McGill. Multivariate information transmission.
Psychometrika, 19(2):97–116, 1954.
[15] A. Moore and W. Wong. Optimal reinsertion: A new search operator for accelerated and more accurate Bayesian network structure learning. In T. Fawcett and N. Mishra, editors, Proceedings of the 20th International Conference on Machine Learning, ICML-03, pages 552–559, Menlo Park, California, August 2003. AAAI Press.
[16] A. E. Raftery. Bayesian model selection in social research. Sociological Methodology, 25:111–164, 1995.
[17] T. Silander and P. Myllymaki. A simple approach for finding the globally optimal Bayesian network structure. In Proceedings of the 22nd Conference on Uncertainty in Artificial Intelligence, UAI-06, pages 445–452, 2006.
[18] M. Teyssier and D. Koller. Ordering-based search: A simple and effective algorithm for learning Bayesian networks. In Proceedings of the 21st Conference on Uncertainty in Artificial Intelligence, UAI-05, pages 584–590, 2005.
[19] C. Yuan and B. Malone. An improved admissible heuristic for learning optimal Bayesian networks. In Proceedings of the 28th Conference on Uncertainty in Artificial Intelligence, UAI-12, 2012.
[20] C. Yuan and B. Malone. Learning optimal Bayesian networks: A shortest path perspective. Journal of Artificial Intelligence Research, 48:23–65, 2013.
Robust Portfolio Optimization

Huitong Qiu, Department of Biostatistics, Johns Hopkins University, Baltimore, MD 21205, hqiu7@jhu.edu
Fang Han, Department of Biostatistics, Johns Hopkins University, Baltimore, MD 21205, fhan@jhu.edu
Han Liu, Department of Operations Research and Financial Engineering, Princeton University, Princeton, NJ 08544, hanliu@princeton.edu
Brian Caffo, Department of Biostatistics, Johns Hopkins University, Baltimore, MD 21205, bcaffo@jhsph.edu

Abstract

We propose a robust portfolio optimization approach based on quantile statistics. The proposed method is robust to extreme events in asset returns, and accommodates large portfolios under limited historical data. Specifically, we show that the risk of the estimated portfolio converges to the oracle optimal risk at a parametric rate under weakly dependent asset returns. The theory does not rely on higher-order moment assumptions, thus allowing for heavy-tailed asset returns. Moreover, the rate of convergence quantifies that the size of the portfolio under management is allowed to scale exponentially with the sample size of the historical data. The empirical effectiveness of the proposed method is demonstrated under both synthetic and real stock data. Our work extends existing ones by achieving robustness in high dimensions, and by allowing serial dependence.

1 Introduction

Markowitz's mean-variance analysis sets the basis for modern portfolio optimization theory [1]. However, the mean-variance analysis has been criticized for being sensitive to estimation errors in the mean and covariance matrix of the asset returns [2, 3]. Compared to the covariance matrix, the mean of the asset returns is more influential and harder to estimate [4, 5]. Therefore, many studies focus on the global minimum variance (GMV) formulation, which only involves estimating the covariance matrix of the asset returns.
Estimating the covariance matrix of asset returns is challenging due to the high dimensionality and heavy-tailedness of asset return data. Specifically, the number of assets under management is usually much larger than the sample size of exploitable historical data. On the other hand, extreme events are typical in financial asset prices, leading to heavy-tailed asset returns. To overcome the curse of dimensionality, structured covariance matrix estimators have been proposed for asset return data. [6] considered estimators based on factor models with observable factors. [7, 8, 9] studied covariance matrix estimators based on latent factor models. [10, 11, 12] proposed to shrink the sample covariance matrix towards highly structured covariance matrices, including the identity matrix, order-1 autoregressive covariance matrices, and one-factor-based covariance matrix estimators. These estimators are commonly based on the sample covariance matrix, and (sub-)Gaussian tail assumptions are required to guarantee consistency. For heavy-tailed data, robust estimators of covariance matrices are desired. Classic robust covariance matrix estimators include M-estimators, minimum volume ellipsoid (MVE) and minimum covariance determinant (MCD) estimators, S-estimators, and estimators based on data outlyingness and depth [13]. These estimators are specifically designed for data with very low dimensions and large sample sizes. For generalizing the robust estimators to high dimensions, [14] proposed the Orthogonalized Gnanadesikan-Kettenring (OGK) estimator, which extends [15]'s estimator by re-estimating the eigenvalues; [16, 17] studied shrinkage estimators based on Tyler's M-estimator. However, although OGK is computationally tractable in high dimensions, its consistency is only guaranteed under fixed dimension. The shrunken Tyler's M-estimator involves iteratively inverting large matrices; moreover, its consistency is only guaranteed when the dimension is of the same order as the sample size.
The aforementioned robust estimators are analyzed under independent data points; their performance under time series data is questionable. In this paper, we build on a quantile-based scatter matrix (footnote 1) estimator, and propose a robust portfolio optimization approach. Our contributions are in three aspects. First, we show that the proposed method accommodates high dimensional data by allowing the dimension to scale exponentially with the sample size. Secondly, we verify that consistency of the proposed method is achieved without any tail conditions, thus allowing for heavy-tailed asset return data. Thirdly, we consider weakly dependent time series, and demonstrate how the degree of dependence affects the consistency of the proposed method.

2 Background

In this section, we introduce the notation system, and provide a review of the gross-exposure constrained portfolio optimization that will be exploited in this paper.

2.1 Notation

Let $v = (v_1, \ldots, v_d)^T$ be a $d$-dimensional real vector, and $M = [M_{jk}] \in \mathbb{R}^{d_1 \times d_2}$ be a $d_1 \times d_2$ matrix with $M_{jk}$ as its $(j, k)$ entry. For $0 < q < \infty$, we define the $\ell_q$ vector norm of $v$ as $\|v\|_q := (\sum_{j=1}^d |v_j|^q)^{1/q}$, and the $\ell_\infty$ vector norm of $v$ as $\|v\|_\infty := \max_{1 \le j \le d} |v_j|$. Let the matrix $\ell_{\max}$ norm of $M$ be $\|M\|_{\max} := \max_{j,k} |M_{jk}|$, and the Frobenius norm be $\|M\|_F := (\sum_{j,k} M_{jk}^2)^{1/2}$. Let $X = (X_1, \ldots, X_d)^T$ and $Y = (Y_1, \ldots, Y_d)^T$ be two random vectors. We write $X \stackrel{d}{=} Y$ if $X$ and $Y$ are identically distributed. We use $\mathbf{1}, \mathbf{2}, \ldots$ to denote vectors with $1, 2, \ldots$ at every entry.

2.2 Gross-exposure Constrained GMV Formulation

Under the GMV formulation, [18] found that imposing a no-short-sale constraint improves portfolio efficiency. [19] relaxed the no-short-sale constraint to a gross-exposure constraint, and showed that portfolio efficiency can be further improved. Let $X \in \mathbb{R}^d$ be a random vector of asset returns. A portfolio is characterized by a vector of investment allocations, $w = (w_1, \ldots, w_d)^T$, among the $d$ assets.
The gross-exposure constrained GMV portfolio optimization can be formulated as
$$\min_w\; w^T \Sigma w \quad \text{s.t.} \quad \mathbf{1}^T w = 1,\; \|w\|_1 \le c. \qquad (2.1)$$
Here $\mathbf{1}^T w = 1$ is the budget constraint, and $\|w\|_1 \le c$ is the gross-exposure constraint. $c \ge 1$ is called the gross-exposure constant, which controls the percentage of long and short positions allowed in the portfolio [19]. The optimization problem (2.1) can be converted into a quadratic programming problem, and solved by standard software [19].

3 Method

In this section, we introduce the quantile-based portfolio optimization approach. Let $Z \in \mathbb{R}$ be a random variable with distribution function $F$, and $\{z_t\}_{t=1}^T$ be a sequence of observations from $Z$. For a constant $q \in [0, 1]$, we define the $q$-quantiles of $Z$ and $\{z_t\}_{t=1}^T$ to be
$$Q(Z; q) = Q(F; q) := \inf\{z : P(Z \le z) \ge q\}, \qquad \hat{Q}(\{z_t\}_{t=1}^T; q) := z_{(k)}, \text{ where } k = \min\{t : t/T \ge q\}.$$
Here $z_{(1)} \le \ldots \le z_{(T)}$ are the order statistics of $\{z_t\}_{t=1}^T$. We say $Q(Z; q)$ is unique if there exists a unique $z$ such that $P(Z \le z) = q$. We say $\hat{Q}(\{z_t\}_{t=1}^T; q)$ is unique if there exists a unique $z \in \{z_1, \ldots, z_T\}$ such that $z = z_{(k)}$. Following the estimator $Q_n$ [20], we define the population and sample quantile-based scales to be
$$\sigma^Q(Z) := Q(|Z - \tilde{Z}|; 1/4) \quad \text{and} \quad \hat{\sigma}^Q(\{z_t\}_{t=1}^T) := \hat{Q}(\{|z_s - z_t|\}_{1 \le s < t \le T}; 1/4). \qquad (3.1)$$
Here $\tilde{Z}$ is an independent copy of $Z$. Based on $\sigma^Q$ and $\hat{\sigma}^Q$, we can further define robust scatter matrices for asset returns. In detail, let $X = (X_1, \ldots, X_d)^T \in \mathbb{R}^d$ be a random vector representing the returns of $d$ assets, and $\{X_t\}_{t=1}^T$ be a sequence of observations from $X$, where $X_t = (X_{t1}, \ldots, X_{td})^T$. We define the population and sample quantile-based scatter matrices (QNE) to be $R^Q := [R^Q_{jk}]$ and $\hat{R}^Q := [\hat{R}^Q_{jk}]$, where the entries of $R^Q$ and $\hat{R}^Q$ are given by
$$R^Q_{jj} := \sigma^Q(X_j)^2, \qquad \hat{R}^Q_{jj} := \hat{\sigma}^Q(\{X_{tj}\}_{t=1}^T)^2,$$
$$R^Q_{jk} := \frac{1}{4}\left[\sigma^Q(X_j + X_k)^2 - \sigma^Q(X_j - X_k)^2\right], \qquad \hat{R}^Q_{jk} := \frac{1}{4}\left[\hat{\sigma}^Q(\{X_{tj} + X_{tk}\}_{t=1}^T)^2 - \hat{\sigma}^Q(\{X_{tj} - X_{tk}\}_{t=1}^T)^2\right].$$

Footnote 1: A scatter matrix is defined to be any matrix proportional to the covariance matrix by a constant.
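The sample quantities in (3.1) and the polarization identity above can be sketched directly in NumPy. This naive sketch forms all pairwise differences, so each entry costs O(T^2) rather than the O(T log T) achievable for Q_n [20]; it is meant only to illustrate the estimator.

```python
import numpy as np

def sample_quantile(z, q):
    """Empirical q-quantile as defined in the text:
    z_(k) with k = min{t : t/T >= q}."""
    z = np.sort(np.asarray(z, dtype=float))
    T = len(z)
    k = int(np.ceil(q * T))          # smallest t with t/T >= q
    return z[max(k, 1) - 1]

def sigma_q(z):
    """Quantile-based scale: the 1/4-quantile of the pairwise
    absolute differences |z_s - z_t|, s < t."""
    z = np.asarray(z, dtype=float)
    i, j = np.triu_indices(len(z), k=1)
    return sample_quantile(np.abs(z[i] - z[j]), 0.25)

def qne(X):
    """Quantile-based scatter matrix (QNE) via the polarization
    identity of the text; X is a T x d matrix of observations."""
    T, d = X.shape
    R = np.empty((d, d))
    for j in range(d):
        R[j, j] = sigma_q(X[:, j]) ** 2
        for k in range(j + 1, d):
            s = sigma_q(X[:, j] + X[:, k]) ** 2
            t = sigma_q(X[:, j] - X[:, k]) ** 2
            R[j, k] = R[k, j] = (s - t) / 4.0
    return R
```

By construction the estimate is symmetric, but not necessarily positive definite, which is exactly why the projection (3.3) below is needed before optimization.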
Since σ̂^Q can be computed in O(T log T) time [20], the computational complexity of R̂^Q is O(d^2 T log T). Since T ≪ d in practice, R̂^Q can be computed almost as efficiently as the sample covariance matrix, which has O(d^2 T) complexity. Let w = (w_1, ..., w_d)^T be the vector of investment allocations among the d assets. For a matrix M, we define a risk function R : R^d × R^{d×d} → R by R(w; M) := w^T M w. When X has covariance matrix Σ, R(w; Σ) = Var(w^T X) is the variance of the portfolio return, w^T X, and is employed as the objective function in the GMV formulation. However, estimating Σ is difficult due to the heavy tails of asset returns. In this paper, we adopt R(w; R^Q) as a robust alternative to the moment-based risk metric, R(w; Σ), and consider the following oracle portfolio optimization problem:

w_opt = argmin_w R(w; R^Q) s.t. 1^T w = 1, ∥w∥_1 ≤ c. (3.2)

Here ∥w∥_1 ≤ c is the gross-exposure constraint introduced in Section 2.2. In practice, R^Q is unknown and has to be estimated. For convexity of the risk function, we project R̂^Q onto the cone of positive definite matrices:

R̃^Q = argmin_R ∥R̂^Q − R∥_max s.t. R ∈ S_λ := {M ∈ R^{d×d} : M^T = M, λ_min I_d ⪯ M ⪯ λ_max I_d}. (3.3)

Here λ_min and λ_max set the lower and upper bounds for the eigenvalues of R̃^Q. The optimization problem (3.3) can be solved by a projection and contraction algorithm [21]; we summarize the algorithm in the supplementary material. Using R̃^Q, we formulate the empirical robust portfolio optimization as

w̃_opt = argmin_w R(w; R̃^Q) s.t. 1^T w = 1, ∥w∥_1 ≤ c. (3.4)

Remark 3.1. The robust portfolio optimization approach involves three parameters: λ_min, λ_max, and c. Empirically, setting λ_min = 0.005 and λ_max = ∞ proves to work well. The constant c is typically provided by investors for controlling the percentage of short positions; when a data-driven choice is desired, we refer to [19] for a cross-validation-based approach.

Remark 3.2. The rationale behind the positive definite projection (3.3) lies in two aspects. First, for the portfolio optimization to be convex and well conditioned, a positive definite matrix with lower-bounded eigenvalues is needed; this is guaranteed by setting λ_min > 0. Second, the projection (3.3) is more robust than the OGK estimate [14]. OGK induces positive definiteness by re-estimating the eigenvalues using the variances of the principal components; robustness is lost when the data, possibly containing outliers, are projected onto the principal directions for estimating the principal components.

Remark 3.3. We adopt the 1/4 quantile in the definitions of σ^Q and σ̂^Q to achieve a 50% breakdown point. However, we note that our methodology and theory carry through if 1/4 is replaced by any absolute constant q ∈ (0, 1).

4 Theoretical Properties

In this section, we provide theoretical analysis of the proposed portfolio optimization approach. For an optimized portfolio, ŵ_opt, based on an estimate, R, of R^Q, the next lemma shows that the error between the risks R(ŵ_opt; R^Q) and R(w_opt; R^Q) is controlled by the estimation error in R.

Lemma 4.1.
Let ŵ_opt be the solution to

min_w R(w; R) s.t. 1^T w = 1, ∥w∥_1 ≤ c (4.1)

for an arbitrary matrix R. Then we have

|R(ŵ_opt; R^Q) − R(w_opt; R^Q)| ≤ 2c^2 ∥R − R^Q∥_max,

where w_opt is the solution to the oracle portfolio optimization problem (3.2), and c is the gross-exposure constant.

Next, we derive the rate of convergence for R(w̃_opt; R^Q), which relates to the rate of convergence of ∥R̃^Q − R^Q∥_max. To this end, we first introduce a dependence condition on the asset return series.

Definition 4.2. Let {X_t}_{t∈Z} be a stationary process. Denote by F^0_{−∞} := σ(X_t : t ≤ 0) and F^∞_n := σ(X_t : t ≥ n) the σ-fields generated by {X_t}_{t≤0} and {X_t}_{t≥n}, respectively. The φ-mixing coefficient is defined by

φ(n) := sup_{B ∈ F^0_{−∞}, A ∈ F^∞_n, P(B)>0} |P(A | B) − P(A)|.

The process {X_t}_{t∈Z} is φ-mixing if and only if lim_{n→∞} φ(n) = 0.

Condition 1. {X_t ∈ R^d}_{t∈Z} is a stationary process such that for any j ≠ k ∈ {1, ..., d}, the processes {X_{tj}}_{t∈Z}, {X_{tj} + X_{tk}}_{t∈Z}, and {X_{tj} − X_{tk}}_{t∈Z} are φ-mixing with φ(n) ≤ 1/n^{1+ϵ} for any n > 0 and some constant ϵ > 0.

The parameter ϵ determines the rate of decay of φ(n), and characterizes the degree of dependence in {X_t}_{t∈Z}. Next, we introduce an identifiability condition on the distribution function of the asset returns.

Condition 2. Let X̃ = (X̃_1, ..., X̃_d)^T be an independent copy of X_1. For any j ≠ k ∈ {1, ..., d}, let F_{1;j}, F_{2;j,k}, and F_{3;j,k} be the distribution functions of |X_{1j} − X̃_j|, |X_{1j} + X_{1k} − X̃_j − X̃_k|, and |X_{1j} − X_{1k} − X̃_j + X̃_k|, respectively. We assume there exist constants κ > 0 and η > 0 such that

inf_{|y − Q(F; 1/4)| ≤ κ} (d/dy) F(y) ≥ η for any F ∈ {F_{1;j}, F_{2;j,k}, F_{3;j,k} : j ≠ k = 1, ..., d}.

Condition 2 guarantees the identifiability of the 1/4 quantiles, and is standard in the literature on quantile statistics [22, 23]. Based on Conditions 1 and 2, we can present the rates of convergence for R̂^Q and R̃^Q.

Theorem 4.3. Let {X_t}_{t∈Z} be an absolutely continuous stationary process satisfying Conditions 1 and 2. Suppose log d/T → 0 as T → ∞.
Then, for any α ∈ (0, 1) and T large enough, with probability no smaller than 1 − 8α^2, we have

∥R̂^Q − R^Q∥_max ≤ r_T. (4.2)

Here the rate of convergence r_T is defined by

r_T = max{ (2/η^2) [√(4(1 + 2C_ϵ)(log d − log α)/T) + 4C_ϵ/T]^2 , (4σ^Q_max/η) [√(4(1 + 2C_ϵ)(log d − log α)/T) + 4C_ϵ/T] }, (4.3)

where σ^Q_max := max{σ^Q(X_j), σ^Q(X_j + X_k), σ^Q(X_j − X_k) : j ≠ k ∈ {1, ..., d}} and C_ϵ := Σ_{k=1}^∞ 1/k^{1+ϵ}. Moreover, if R^Q ∈ S_λ for S_λ defined in (3.3), we further have

∥R̃^Q − R^Q∥_max ≤ 2 r_T. (4.4)

The implications of Theorem 4.3 are as follows.

1. When the parameters η, ϵ, and σ^Q_max do not scale with T, the rate of convergence reduces to O_P(√(log d / T)). Thus, the number of assets under management is allowed to scale exponentially with the sample size T. Compared to similar rates of convergence obtained for sample-covariance-based estimators [24, 25, 9], we do not require any moment or tail conditions, thus accommodating heavy-tailed asset return data.

2. The effect of serial dependence on the rate of convergence is characterized by C_ϵ. Specifically, as ϵ approaches 0, C_ϵ = Σ_{k=1}^∞ 1/k^{1+ϵ} increases towards infinity, inflating r_T. ϵ is allowed to scale with T as long as C_ϵ = o(T/log d).

3. The rate of convergence r_T is inversely related to the lower bound, η, on the marginal density functions around the 1/4 quantiles. This is because when η is small, the distribution functions are flat around the 1/4 quantiles, making the population quantiles harder to estimate.

Combining Lemma 4.1 and Theorem 4.3, we obtain the rate of convergence for R(w̃_opt; R^Q).

Theorem 4.4. Let {X_t}_{t∈Z} be an absolutely continuous stationary process satisfying Conditions 1 and 2. Suppose that log d/T → 0 as T → ∞ and R^Q ∈ S_λ. Then, for any α ∈ (0, 1) and T large enough, we have

|R(w̃_opt; R^Q) − R(w_opt; R^Q)| ≤ 2c^2 r_T, (4.5)

where r_T is defined in (4.3) and c is the gross-exposure constant. Theorem 4.4 shows that the risk of the estimated portfolio converges to the oracle optimal risk at the parametric rate r_T.
The number of assets, d, is allowed to scale exponentially with the sample size T. Moreover, the rate of convergence does not rely on any tail conditions on the distribution of the asset returns.

For the rest of this section, we build the connection between the proposed robust portfolio optimization and its moment-based counterpart. Specifically, we show that they are consistent under the elliptical model.

Definition 4.5 [26]. A random vector X ∈ R^d follows an elliptical distribution with location µ ∈ R^d and scatter S ∈ R^{d×d} if and only if there exist a nonnegative random variable ξ ∈ R, a matrix A ∈ R^{d×r} with rank(A) = r, and a random vector U ∈ R^r, independent of ξ and uniformly distributed on the r-dimensional unit sphere S^{r−1}, such that X =_d µ + ξAU. Here S = AA^T has rank r. We denote X ∼ EC_d(µ, S, ξ), and call ξ the generating variate.

Commonly used elliptical distributions include the Gaussian distribution and the t-distribution. Elliptical distributions have been widely used for modeling financial return data, since they naturally capture many stylized properties including heavy tails and tail dependence [27, 28, 29, 30, 31, 32]. The next theorem relates R^Q and R(w; R^Q) to their moment-based counterparts, Σ and R(w; Σ), under the elliptical model.

Theorem 4.6. Let X = (X_1, ..., X_d)^T ∼ EC_d(µ, S, ξ) be an absolutely continuous elliptical random vector and X̃ = (X̃_1, ..., X̃_d)^T be an independent copy of X. Then, we have

R^Q = m_Q S (4.6)

for some constant m_Q depending only on the distribution of X. Moreover, if 0 < Eξ^2 < ∞, we have

R^Q = c_Q Σ and R(w; R^Q) = c_Q R(w; Σ), (4.7)

where Σ = Cov(X) is the covariance matrix of X, and c_Q is a constant given by

c_Q = Q{(X_j − X̃_j)^2 / Var(X_j); 1/4} = Q{(X_j + X_k − X̃_j − X̃_k)^2 / Var(X_j + X_k); 1/4} = Q{(X_j − X_k − X̃_j + X̃_k)^2 / Var(X_j − X_k); 1/4}. (4.8)

Here the last two equalities hold when Var(X_j + X_k) > 0 and Var(X_j − X_k) > 0.
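As a numerical sanity check on (4.7) and (4.8) in the Gaussian case (an illustration of ours, not from the paper): X_j − X̃_j is normal with variance 2 Var(X_j), so c_Q is the 1/4-quantile of 2Z^2 for a standard normal Z, and the diagonal identity R^Q_jj = σ^Q(X_j)^2 = c_Q Var(X_j) can be confirmed by Monte Carlo:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 400_000

# (X_j - X_j~)^2 / Var(X_j) has the law of 2 Z^2 for standard normal Z,
# so c_Q is the 0.25-quantile of 2 Z^2 (roughly 0.203 in closed form,
# since P(Z^2 <= t) = 2 Phi(sqrt(t)) - 1).
z = rng.standard_normal(n)
c_q_hat = np.quantile(2.0 * z ** 2, 0.25)

# Check the diagonal of (4.7): sigma^Q(X)^2 should equal c_Q * Var(X).
# Take X ~ N(0, 4), i.e. Var(X) = 4, and an independent copy X~.
x = 2.0 * rng.standard_normal(n)
x_tilde = 2.0 * rng.standard_normal(n)
sigma_q_sq = np.quantile(np.abs(x - x_tilde), 0.25) ** 2
```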
By Theorem 4.6, under the elliptical model, minimizing the robust risk metric, R(w; R^Q), is equivalent to minimizing the standard moment-based risk metric, R(w; Σ). Thus, the robust portfolio optimization (3.2) is equivalent to its moment-based counterpart (2.1) at the population level. Plugging (4.7) into (4.5) leads to the following theorem.

Theorem 4.7. Let {X_t}_{t∈Z} be an absolutely continuous stationary process satisfying Conditions 1 and 2. Suppose that X_1 ∼ EC_d(µ, S, ξ) follows an elliptical distribution with covariance matrix Σ, and log d/T → 0 as T → ∞. Then, we have

|R(w̃_opt; Σ) − R(w_opt; Σ)| ≤ (2c^2 / c_Q) r_T,

where c is the gross-exposure constant, c_Q is defined in (4.8), and r_T is defined in (4.3). Thus, under the elliptical model, the optimal portfolio, w̃_opt, obtained from the robust portfolio optimization also achieves a parametric rate of convergence for the standard moment-based risk.

5 Experiments

In this section, we investigate the empirical performance of the proposed portfolio optimization approach. In Section 5.1, we demonstrate the robustness of the proposed approach using synthetic heavy-tailed data. In Section 5.2, we simulate portfolio management using the Standard & Poor's 500 (S&P 500) stock index data. The proposed portfolio optimization approach (QNE) is compared with three competitors, constructed by replacing the covariance matrix Σ in (2.1) with commonly used covariance/scatter matrix estimators:

1. OGK: The orthogonalized Gnanadesikan-Kettenring estimator constructs a pilot scatter matrix estimate using a robust τ-estimator of scale, then re-estimates the eigenvalues using the variances of the principal components [14].
2. Factor: The principal factor estimator iteratively solves for the specific variances and the factor loadings [33].
3. Shrink: The shrinkage estimator shrinks the sample covariance matrix towards a one-factor covariance estimator [10].
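Each method plugs its covariance/scatter estimate into the gross-exposure problem (2.1). A minimal sketch of that optimization (ours, for illustration; it uses SciPy's general-purpose SLSQP on the smooth split-variable reformulation w = w⁺ − w⁻ rather than the dedicated quadratic programming software referenced in [19]):

```python
import numpy as np
from scipy.optimize import minimize

def gross_exposure_portfolio(M, c):
    """min_w w'Mw  s.t.  1'w = 1, ||w||_1 <= c.

    Splitting w = wp - wn with wp, wn >= 0 makes the problem smooth:
    the l1 constraint becomes the linear constraint sum(wp) + sum(wn) <= c.
    """
    d = M.shape[0]

    def risk(x):
        w = x[:d] - x[d:]
        return w @ M @ w

    constraints = [
        {"type": "eq", "fun": lambda x: np.sum(x[:d]) - np.sum(x[d:]) - 1.0},
        {"type": "ineq", "fun": lambda x: c - np.sum(x)},
    ]
    x0 = np.concatenate([np.full(d, 1.0 / d), np.zeros(d)])
    res = minimize(risk, x0, method="SLSQP",
                   bounds=[(0.0, None)] * (2 * d), constraints=constraints)
    return res.x[:d] - res.x[d:]
```

With c = 1 the constraints force all weights nonnegative, recovering the no-short-sale portfolio of [18].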
5.1 Synthetic Data

Following [19], we construct the covariance matrix of the asset returns using a three-factor model:

X_j = b_{j1} f_1 + b_{j2} f_2 + b_{j3} f_3 + ε_j, j = 1, ..., d, (5.1)

where X_j is the return of the j-th stock, b_{jk} is the loading of the j-th stock on factor f_k, and ε_j is the idiosyncratic noise independent of the three factors. Under this model, the covariance matrix of the stock returns is given by

Σ = B Σ_f B^T + diag(σ_1^2, ..., σ_d^2), (5.2)

where B = [b_{jk}] is the d × 3 matrix of factor loadings, Σ_f is the covariance matrix of the three factors, and σ_j^2 is the variance of the noise ε_j. We adopt the covariance in (5.2) in our simulations. Following [19], we generate the factor loadings B from a trivariate normal distribution, N_3(µ_b, Σ_b), where the mean, µ_b, and covariance, Σ_b, are specified in Table 1. After the factor loadings are generated, they are fixed as parameters throughout the simulations. The covariance matrix, Σ_f, of the three factors is also given in Table 1. The standard deviations, σ_1, ..., σ_d, of the idiosyncratic noises are generated independently from a truncated gamma distribution with shape 3.3586 and scale 0.1876, restricting the support to [0.195, ∞); again, these standard deviations are fixed as parameters once they are generated. According to [19], these parameters are obtained by fitting the three-factor model (5.1) to three-year daily return data of 30 Industry Portfolios from May 1, 2002 to Aug. 29, 2005. The covariance matrix, Σ, is fixed throughout the simulations. Since we are only interested in risk optimization, we set the mean of the asset returns to µ = 0. The number of stocks under consideration is fixed at d = 100. Given the covariance matrix Σ, we generate the asset return data from the following three distributions.

D1: multivariate Gaussian distribution, N_d(0, Σ);

Table 1: Parameters for generating the covariance matrix in Equation (5.2).
Parameters for factor loadings (µ_b, Σ_b) and factor returns (Σ_f):

µ_b:       Σ_b:                             Σ_f:
0.7828     0.02915   0.02387   0.01018      1.2507   -0.0350   -0.2042
0.5180     0.02387   0.05395  -0.00697     -0.0350    0.3156   -0.0023
0.4100     0.01018  -0.00697   0.08686     -0.2042   -0.0023    0.1930

Figure 1: Portfolio risks, selected number of stocks, and matching rates to the oracle optimal portfolios. [Plots omitted; the panels show risk (top row) and matching rate (bottom row) versus the gross-exposure constant c ∈ [1, 2] for QNE, OGK, Factor, and Shrink (plus the oracle in the risk panels), under the Gaussian, multivariate t, and elliptical log-normal models.]

D2: multivariate t distribution with 3 degrees of freedom and covariance matrix Σ;
D3: elliptical distribution with log-normal generating variate, log N(0, 2), and covariance matrix Σ.

Under each distribution, we generate asset return series of half a year (T = 126). We estimate the covariance/scatter matrices using QNE and the three competitors, and plug them into (2.1) to optimize the portfolio allocations. We also solve (2.1) with the true covariance matrix, Σ, to obtain the oracle optimal portfolios as benchmarks. We range the gross-exposure constraint, c, from 1 to 2. The results are based on 1,000 simulations. Figure 1 shows the portfolio risks R(ŵ; Σ) and the matching rates between the optimized portfolios and the oracle optimal portfolios (see footnote 2). Here the matching rate is defined as follows.
For two portfolios P1 and P2, let S1 and S2 be the corresponding sets of selected assets, i.e., the assets for which the weights, w_i, are non-zero. The matching rate between P1 and P2 is defined as r(P1, P2) = |S1 ∩ S2| / |S1 ∪ S2|, where |S| denotes the cardinality of the set S. We note two observations from Figure 1. (i) The four estimators lead to comparable portfolio risks under the Gaussian model D1. However, under the heavy-tailed distributions D2 and D3, QNE achieves lower portfolio risk. (ii) The matching rates of QNE are stable across the three models, and are higher than those of the competing methods under the heavy-tailed distributions D2 and D3. Thus, we conclude that QNE is robust to heavy tails in both risk minimization and asset selection.

Footnote 2: Due to the ℓ1 regularization in the gross-exposure constraint, the solution is generally sparse.

5.2 Real Data

In this section, we simulate portfolio management using the S&P 500 stocks. We collect 1,258 adjusted daily closing prices (see footnote 3) for 435 stocks that stayed in the S&P 500 index from January 1, 2003 to December 31, 2007. Using the closing prices, we obtain 1,257 daily returns as the daily growth rates of the prices.

Footnote 3: The adjusted closing prices account for all corporate actions, including stock splits, dividends, and rights offerings.

Table 2: Annualized Sharpe ratios, returns, and risks under the four competing approaches, using S&P 500 index data.

                      QNE    OGK   Factor  Shrink
Sharpe ratio   c=1.0  2.04   1.64   1.29    0.92
               c=1.2  1.89   1.39   1.22    0.74
               c=1.4  1.61   1.24   1.34    0.72
               c=1.6  1.56   1.31   1.38    0.75
               c=1.8  1.55   1.48   1.41    0.78
               c=2.0  1.53   1.51   1.43    0.83
return (in %)  c=1.0 20.46  16.59  13.18    9.84
               c=1.2 18.41  13.15  10.79    7.20
               c=1.4 15.58  11.30  10.88    6.55
               c=1.6 15.02  11.48  10.68    6.49
               c=1.8 14.77  12.39  10.57    6.58
               c=2.0 14.51  12.27  10.60    6.76
risk (in %)    c=1.0 10.02  10.09  10.19   10.70
               c=1.2  9.74   9.46   8.83    9.76
               c=1.4  9.70   9.10   8.12    9.14
               c=1.6  9.63   8.75   7.71    8.68
               c=1.8  9.54   8.39   7.51    8.38
               c=2.0  9.48   8.13   7.43    8.18
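The daily growth rates just mentioned, and the annualized quantities reported in Table 2, can be sketched as follows (an illustration of ours; the 252-trading-day, zero-risk-free-rate annualization convention is our assumption, as the paper does not state one):

```python
import numpy as np

def daily_returns(prices):
    """Daily returns as the daily growth rates of adjusted closing prices."""
    p = np.asarray(prices, dtype=float)
    return p[1:] / p[:-1] - 1.0

def annualized_stats(portfolio_returns, trading_days=252):
    """Annualized mean return, risk (volatility), and Sharpe ratio
    from a series of daily portfolio returns (risk-free rate taken as 0)."""
    r = np.asarray(portfolio_returns, dtype=float)
    ann_return = trading_days * r.mean()
    ann_risk = np.sqrt(trading_days) * r.std(ddof=1)
    return ann_return, ann_risk, ann_return / ann_risk
```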
We manage a portfolio consisting of the 435 stocks from January 1, 2003 to December 31, 2007 (see footnote 4). On days i = 42, 43, ..., 1,256, we optimize the portfolio allocations using the past two months of stock return data (42 sample points). We hold the portfolio for one day, and evaluate the portfolio return on day i + 1. In this way, we obtain 1,215 portfolio returns. We repeat the process for each of the four methods under comparison, and range the gross-exposure constant c from 1 to 2 (see footnote 5). Since the true covariance matrix of the stock returns is unknown, we adopt the Sharpe ratio for evaluating the performances of the portfolios. Table 2 summarizes the annualized Sharpe ratios, mean returns, and empirical risks (i.e., standard deviations of the portfolio returns). We observe that QNE achieves the largest Sharpe ratios under all values of the gross-exposure constant, indicating the lowest risks under the same returns (or, equivalently, the highest returns under the same risk).

6 Discussion

In this paper, we propose a robust portfolio optimization framework, building on a quantile-based scatter matrix. We obtain non-asymptotic rates of convergence for the scatter matrix estimators and for the risk of the estimated portfolio. The relations of the proposed framework with its moment-based counterpart are well understood.

The main contribution of the robust portfolio optimization approach lies in its robustness to heavy tails in high dimensions. Heavy tails present unique challenges in high dimensions compared to low dimensions. For example, asymptotic theory of M-estimators guarantees consistency at the rate O_P(√(d/n)) even for non-Gaussian data [34, 35]. If d ≪ n, the statistical error diminishes rapidly with increasing n. However, when d ≫ n, the statistical error may scale rapidly with the dimension. Thus, stringent tail conditions, such as sub-Gaussian conditions, are required to guarantee consistency for moment-based estimators in high dimensions [36].
In this paper, based on quantile statistics, we achieve consistency for the portfolio risk without assuming any tail conditions, while allowing d to scale nearly exponentially with n. Another contribution of this work lies in the theoretical analysis of how serial dependence may affect consistency of the estimation. We measure the degree of serial dependence using the φ-mixing coefficient, φ(n). We show that the effect of serial dependence on the rate of convergence is summarized by the parameter C_ϵ, which characterizes the size of Σ_{n=1}^∞ φ(n).

Footnote 4: We drop the data after 2007 to avoid the financial crisis, when the stock prices are likely to violate the stationarity assumption.
Footnote 5: c = 2 imposes a 50% upper bound on the percentage of short positions. In practice, the percentage of short positions is usually strictly controlled to be much lower.

References
[1] Harry Markowitz. Portfolio selection. The Journal of Finance, 7(1):77–91, 1952.
[2] Michael J. Best and Robert R. Grauer. On the sensitivity of mean-variance-efficient portfolios to changes in asset means: some analytical and computational results. Review of Financial Studies, 4(2):315–342, 1991.
[3] Vijay Kumar Chopra and William T. Ziemba. The effect of errors in means, variances, and covariances on optimal portfolio choice. The Journal of Portfolio Management, 19(2):6–11, 1993.
[4] Robert C. Merton. On estimating the expected return on the market: An exploratory investigation. Journal of Financial Economics, 8(4):323–361, 1980.
[5] Jarl G. Kallberg and William T. Ziemba. Mis-specifications in portfolio selection problems. In Risk and Capital, pages 74–87. Springer, 1984.
[6] Jianqing Fan, Yingying Fan, and Jinchi Lv. High dimensional covariance matrix estimation using a factor model. Journal of Econometrics, 147(1):186–197, 2008.
[7] James H. Stock and Mark W. Watson. Forecasting using principal components from a large number of predictors. Journal of the American Statistical Association, 97(460):1167–1179, 2002.
[8] Jushan Bai and Kunpeng Li. Statistical analysis of factor models of high dimension. The Annals of Statistics, 40(1):436–465, 2012.
[9] Jianqing Fan, Yuan Liao, and Martina Mincheva. Large covariance estimation by thresholding principal orthogonal complements. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 75(4):603–680, 2013.
[10] Olivier Ledoit and Michael Wolf. Improved estimation of the covariance matrix of stock returns with an application to portfolio selection. Journal of Empirical Finance, 10(5):603–621, 2003.
[11] Olivier Ledoit and Michael Wolf. A well-conditioned estimator for large-dimensional covariance matrices. Journal of Multivariate Analysis, 88(2):365–411, 2004.
[12] Olivier Ledoit and Michael Wolf. Honey, I shrunk the sample covariance matrix. The Journal of Portfolio Management, 30(4):110–119, 2004.
[13] Peter J. Huber. Robust Statistics. Wiley, 1981.
[14] Ricardo A. Maronna and Ruben H. Zamar. Robust estimates of location and dispersion for high-dimensional datasets. Technometrics, 44(4):307–317, 2002.
[15] Ramanathan Gnanadesikan and John R. Kettenring. Robust estimates, residuals, and outlier detection with multiresponse data. Biometrics, 28(1):81–124, 1972.
[16] Yilun Chen, Ami Wiesel, and Alfred O. Hero. Robust shrinkage estimation of high-dimensional covariance matrices. IEEE Transactions on Signal Processing, 59(9):4097–4107, 2011.
[17] Romain Couillet and Matthew R. McKay. Large dimensional analysis and optimization of robust shrinkage covariance matrix estimators. Journal of Multivariate Analysis, 131:99–120, 2014.
[18] Ravi Jagannathan and Tongshu Ma. Risk reduction in large portfolios: Why imposing the wrong constraints helps. The Journal of Finance, 58(4):1651–1683, 2003.
[19] Jianqing Fan, Jingjing Zhang, and Ke Yu. Vast portfolio selection with gross-exposure constraints. Journal of the American Statistical Association, 107(498):592–606, 2012.
[20] Peter J. Rousseeuw and Christophe Croux.
Alternatives to the median absolute deviation. Journal of the American Statistical Association, 88(424):1273–1283, 1993.
[21] M. H. Xu and H. Shao. Solving the matrix nearness problem in the maximum norm by applying a projection and contraction method. Advances in Operations Research, 2012:1–15, 2012.
[22] Alexandre Belloni and Victor Chernozhukov. ℓ1-penalized quantile regression in high-dimensional sparse models. The Annals of Statistics, 39(1):82–130, 2011.
[23] Lan Wang, Yichao Wu, and Runze Li. Quantile regression for analyzing heterogeneity in ultra-high dimension. Journal of the American Statistical Association, 107(497):214–222, 2012.
[24] Peter J. Bickel and Elizaveta Levina. Covariance regularization by thresholding. The Annals of Statistics, 36(6):2577–2604, 2008.
[25] T. Tony Cai, Cun-Hui Zhang, and Harrison H. Zhou. Optimal rates of convergence for covariance matrix estimation. The Annals of Statistics, 38(4):2118–2144, 2010.
[26] Kai-Tai Fang, Samuel Kotz, and Kai Wang Ng. Symmetric Multivariate and Related Distributions. Chapman and Hall, 1990.
[27] Harry Joe. Multivariate Models and Dependence Concepts. Chapman and Hall, 1997.
[28] Rafael Schmidt. Tail dependence for elliptically contoured distributions. Mathematical Methods of Operations Research, 55(2):301–327, 2002.
[29] Svetlozar Todorov Rachev. Handbook of Heavy Tailed Distributions in Finance. Elsevier, 2003.
[30] Svetlozar T. Rachev, Christian Menn, and Frank J. Fabozzi. Fat-tailed and Skewed Asset Return Distributions: Implications for Risk Management, Portfolio Selection, and Option Pricing. Wiley, 2005.
[31] Kevin Dowd. Measuring Market Risk. Wiley, 2007.
[32] Torben Gustav Andersen. Handbook of Financial Time Series. Springer, 2009.
[33] Jushan Bai and Shuzhong Shi. Estimating high dimensional covariance matrices and its applications. Annals of Economics and Finance, 12(2):199–215, 2011.
[34] Sara van de Geer. Empirical Processes in M-estimation. Cambridge University Press, Cambridge, 2000.
[35] Alastair R. Hall. Generalized Method of Moments. Oxford University Press, Oxford, 2005.
[36] Peter Bühlmann and Sara van de Geer. Statistics for High-dimensional Data: Methods, Theory and Applications. Springer, 2011.
Differentially Private Learning of Structured Discrete Distributions

Ilias Diakonikolas* (University of Edinburgh), Moritz Hardt (Google Research), Ludwig Schmidt (MIT)

Abstract

We investigate the problem of learning an unknown probability distribution over a discrete population from random samples. Our goal is to design efficient algorithms that simultaneously achieve low error in total variation norm while guaranteeing Differential Privacy to the individuals of the population. We describe a general approach that yields near sample-optimal and computationally efficient differentially private estimators for a wide range of well-studied and natural distribution families. Our theoretical results show that for a wide variety of structured distributions there exist private estimation algorithms that are nearly as efficient—both in terms of sample size and running time—as their non-private counterparts. We complement our theoretical guarantees with an experimental evaluation. Our experiments illustrate the speed and accuracy of our private estimators on both synthetic mixture models and a large public data set.

1 Introduction

The majority of available data in modern machine learning applications come in a raw and unlabeled form. An important class of unlabeled data is naturally modeled as samples from a probability distribution over a very large discrete domain. Such data occurs in almost every setting imaginable—financial transactions, seismic measurements, neurobiological data, sensor networks, and network traffic records, to name a few. A classical problem in this context is that of density estimation or distribution learning: given a number of i.i.d. samples from an unknown target distribution, we want to compute an accurate approximation of the distribution. Statistical and computational efficiency are the primary performance criteria for a distribution learning algorithm.
More specifically, we would like to design an algorithm whose sample size requirements are information-theoretically optimal, and whose running time is nearly linear in its sample size. Beyond computational and statistical efficiency, however, data analysts typically have a variety of additional criteria they must balance. In particular, data providers often need to maintain the anonymity and privacy of those individuals whose information was collected. How can we reveal useful statistics about a population, while still preserving the privacy of individuals? In this paper, we study the problem of density estimation in the presence of privacy constraints, focusing on the notion of differential privacy [1].

Our contributions. Our main findings suggest that the marginal cost of ensuring differential privacy in the context of distribution learning is only moderate. In particular, for a broad class of shape-constrained density estimation problems, we give private estimation algorithms that are nearly as efficient—both in terms of sample size and running time—as a nearly optimal non-private baseline. As our learning algorithm approximates the underlying distribution up to small error in total variation norm, all crucial properties of the underlying distribution are preserved. In particular, the analyst is free to compose our learning algorithm with an arbitrary non-private analysis.

*The authors are listed in alphabetical order.

Our strong positive results apply to all distribution families that can be well-approximated by piecewise polynomial distributions, extending a recent line of work [2, 3, 4] to the differentially private setting. This is a rich class of distributions, including several natural mixture models, log-concave distributions, and monotone distributions, amongst many other examples. Our algorithm is agnostic, so that even if the unknown distribution does not conform exactly to any of these distribution families, it continues to find a good approximation.
These surprising positive results stand in sharp contrast with a long line of worst-case hardness results and lower bounds in differential privacy, which show separations between private and non-private learning in various settings. Complementing our theoretical guarantees, we present a novel heuristic method to achieve empirically strong performance. Our heuristic always guarantees privacy and typically converges rapidly. We show on various data sets that our method scales easily to input sizes that were previously prohibitive for any implemented differentially private algorithm. At the same time, the algorithm approaches the estimation error of the best known non-private method for a sufficiently large number of samples.

Technical overview. We briefly introduce a standard model of learning an unknown probability distribution from samples (namely, that of [5]), which is essentially equivalent to the minimax rate of convergence in ℓ1-distance [6]. A distribution learning problem is defined by a class C of distributions. The algorithm has access to independent samples from an unknown distribution p, and its goal is to output a hypothesis distribution h that is "close" to p. We measure the closeness between distributions in total variation distance, which is equivalent to the ℓ1-distance and sometimes also called statistical distance. In the "noiseless" setting, we are promised that p ∈ C, and the goal is to construct a hypothesis h such that (with high probability) the total variation distance d_TV(h, p) between h and p is at most α, where α > 0 is the accuracy parameter. The more challenging "noisy" or agnostic model captures the situation of having arbitrary (or even adversarial) noise in the data.
Formally, given sample access to a (potentially arbitrary) target distribution p and α > 0, the goal of an agnostic learning algorithm for C is to compute a hypothesis distribution h such that d_TV(h, p) ≤ C · opt_C(p) + α, where opt_C(p) is the total variation distance between p and the closest distribution to it in C, and C ≥ 1 is a universal constant.

It is a folklore fact that learning an arbitrary discrete distribution over a domain of size N to constant accuracy requires Ω(N) samples and running time. The underlying algorithm is straightforward: output the empirical distribution. For distributions over very large domains, a linear dependence on N is of course impractical, and one might hope that drastically better results can be obtained for most natural settings. Indeed, there are many natural and fundamental distribution estimation problems where significant improvements are possible. Consider for example the class of all unimodal distributions over [N]. In sharp contrast to the Ω(N) lower bound for the unrestricted case, an algorithm of Birgé [7] is known to learn any unimodal distribution over [N] with running time and sample complexity of O(log N).

The starting point of our work is a recent technique [3, 8, 4] for learning univariate distributions via piecewise polynomial approximation. Our first main contribution is a generalization of this technique to the setting of approximate differential privacy. To achieve this result, we exploit a connection between structured distribution learning and private "Kolmogorov approximations". More specifically, we show in Section 3 that, for the class of structured distributions we consider, a private algorithm that approximates an input histogram in the Kolmogorov distance, combined with the algorithmic framework of [4], yields sample- and computationally-efficient private learners under the total variation distance.
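For intuition about private histogram release, a sketch of the textbook Laplace-mechanism baseline is below. This is our illustration only: it is not the sub-logarithmic-error algorithm of [9] that the paper builds on, and its Kolmogorov error is correspondingly worse.

```python
import numpy as np

def laplace_private_histogram(samples, N, epsilon, seed=None):
    """Release an epsilon-DP estimate of the normalized histogram over [N].

    Changing one individual's value moves two counts by 1 each, so the
    L1 sensitivity of the count vector is 2; adding Laplace(2/epsilon)
    noise per bin gives epsilon-DP by the standard Laplace mechanism.
    """
    rng = np.random.default_rng(seed)
    counts = np.bincount(np.asarray(samples) - 1, minlength=N).astype(float)
    noisy = counts + rng.laplace(scale=2.0 / epsilon, size=N)
    noisy = np.clip(noisy, 0.0, None)   # post-processing preserves DP
    total = noisy.sum()
    return noisy / total if total > 0 else np.full(N, 1.0 / N)
```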
Our approach crucially exploits the structure of the underlying distributions, as the Kolmogorov distance is a much weaker metric than the total variation distance. Combined with a recent private algorithm [9], we obtain differentially private learners for a wide range of structured distributions over [N]. The sample complexity of our algorithms matches their non-private analogues up to a standard dependence on the privacy parameters and a multiplicative factor of at most O(2^{log* N}), where log* denotes the iterated logarithm function. The running time of our algorithm is nearly-linear in the sample size and logarithmic in the domain size. Related Work. There is a long history of research in statistics on estimating structured families of distributions going back to the 1950’s [10, 11, 12, 13], and it is still a very active research area [14, 15, 16]. Theoretical computer scientists have also studied these problems with an explicit focus on computational efficiency [5, 17, 18, 19, 3]. In statistics, the study of inference questions under privacy constraints goes back to the classical work of Warner [20]. Recently, Duchi et al. [21, 22] study the trade-off between statistical efficiency and privacy in a local model of privacy, obtaining sample complexity bounds for basic inference problems. We work in the non-local model and our focus is on both statistical and computational efficiency. There is a large literature on answering so-called “range queries” or “threshold queries” over an ordered domain subject to differential privacy. See, for example, [23] as well as the recent work [24] and many references therein. If the output of the algorithm represents a histogram over the domain that is accurate on all such queries, then this task is equivalent to approximating a sample in Kolmogorov distance, which is the task we consider. Apart from the work of Beimel et al. [25] and Bun et al.
[9], to the best of our knowledge all algorithms in this literature (e.g., [23, 24]) have a running time that depends polynomially on the domain size N. Moreover, except for the aforementioned works, we know of no other algorithm that achieves a sub-logarithmic dependence on the domain size in its approximation guarantee. Of all algorithms in this area, we believe that ours is the first implemented algorithm that scales to very large domains with strong empirical performance, as we demonstrate in Section 5. 2 Preliminaries Notation and basic definitions. For N ∈ Z+, we write [N] to denote the set {1, . . . , N}. The ℓ1-norm of a vector v ∈ R^N (or equivalently, a function from [N] to R) is ∥v∥_1 = Σ_{i=1}^N |v_i|. For a discrete probability distribution p : [N] → [0, 1], we write p(i) to denote the probability of element i ∈ [N] under p. For a subset of the domain S ⊆ [N], we write p(S) to denote Σ_{i∈S} p(i). The total variation distance between two distributions p and q over [N] is d_TV(p, q) := max_{S⊆[N]} |p(S) − q(S)| = (1/2) · ∥p − q∥_1. The Kolmogorov distance between p and q is defined as d_K(p, q) := max_{j∈[N]} |Σ_{i=1}^j p(i) − Σ_{i=1}^j q(i)|. Note that d_K(p, q) ≤ d_TV(p, q). Given a set S of n independent samples s_1, . . . , s_n drawn from a distribution p : [N] → [0, 1], the empirical distribution p̂_n : [N] → [0, 1] is defined as follows: for all i ∈ [N], p̂_n(i) = |{j ∈ [n] | s_j = i}| / n. Definition 1 (Distribution Learning). Let C be a family of distributions over a domain Ω. Given sample access to an unknown distribution p over Ω and 0 < α, β < 1, the goal of an (α, β)-agnostic learning algorithm for C is to compute a hypothesis distribution h such that with probability at least 1 − β it holds that d_TV(h, p) ≤ C · opt_C(p) + α, where opt_C(p) := inf_{q∈C} d_TV(q, p) and C ≥ 1 is a universal constant. Differential Privacy. A database D ∈ [N]^n is an n-tuple of items from [N]. Given a database D = (d_1, . . . , d_n), we let hist(D) denote the normalized histogram corresponding to D.
That is, hist(D) = (1/n) Σ_{i=1}^n e_{d_i}, where e_j denotes the j-th standard basis vector in R^N. Definition 2 (Differential Privacy). A randomized algorithm M : [N]^n → R (where R is some arbitrary range) is (ϵ, δ)-differentially private if for all pairs of inputs D, D′ ∈ [N]^n differing in only one entry, we have that for all subsets of the range S ⊆ R, the algorithm satisfies: Pr[M(D) ∈ S] ≤ exp(ϵ) · Pr[M(D′) ∈ S] + δ. In the context of private distribution learning, the database D is the sample set S from the unknown target distribution p. In this case, the normalized histogram corresponding to D is the same as the empirical distribution corresponding to S, i.e., hist(S) = p̂_n. Basic tools from probability. We recall some probabilistic inequalities that will be crucial for our analysis. Our first tool is the well-known VC inequality. Given a family of subsets A over [N], define ∥p∥_A = sup_{A∈A} |p(A)|. The VC dimension of A is the maximum size of a subset X ⊆ [N] that is shattered by A (a set X is shattered by A if for every Y ⊆ X some A ∈ A satisfies A ∩ X = Y). Theorem 1 (VC inequality, [6, p. 31]). Let p̂_n be an empirical distribution of n samples from p. Let A be a family of subsets of VC dimension k. Then E[∥p − p̂_n∥_A] ≤ O(√(k/n)). We note that the RHS above is best possible (up to constant factors) and independent of the domain size N. The Dvoretzky-Kiefer-Wolfowitz (DKW) inequality [26] can be obtained as a consequence of the VC inequality by taking A to be the class of all intervals. The DKW inequality implies that for n = Ω(1/ϵ²), with probability at least 9/10 (over the draw of n samples from p) the empirical distribution p̂_n will be ϵ-close to p in Kolmogorov distance. We will also use the following uniform convergence bound: Theorem 2 ([6, p. 17]). Let A be a family of subsets over [N], and p̂_n be an empirical distribution of n samples from p. Let X be the random variable ∥p − p̂_n∥_A. Then we have Pr[X − E[X] > η] ≤ e^{−2nη²}. Connection to Synthetic Data.
Distribution learning is closely related to the problem of generating synthetic data. Any dataset D of size n over a universe X can be interpreted as a distribution over the domain {1, . . . , |X|}. The weight of item x ∈ X corresponds to the fraction of elements in D that are equal to x. In fact, this histogram view is convenient in a number of algorithms in differential privacy. If we manage to learn this unknown distribution, then we can take n samples from it to obtain another synthetic dataset D′. Hence, the quality of the distribution learner dictates the statistical properties of the synthetic dataset. Learning in total variation distance is particularly appealing from this point of view. If two datasets represented as distributions p, q satisfy d_TV(p, q) ≤ α, then for every test function f : X → {0, 1} we must have that |E_{x∼p} f(x) − E_{x∼q} f(x)| ≤ α. Put in different terminology, this means that the answer to any statistical query (i.e., the average of a predicate over the dataset) differs by at most α between the two distributions. 3 A Differentially Private Learning Framework In this section, we describe our private distribution learning upper bounds. We start with the simple case of privately learning an arbitrary discrete distribution over [N]. We then extend this bound to the case of privately and agnostically learning a histogram distribution over an arbitrary but known partition of [N]. Finally, we generalize the recent framework of [4] to obtain private agnostic learners for histogram distributions over an arbitrary unknown partition, and more generally piecewise polynomial distributions. Our first theorem gives a differentially private algorithm for arbitrary distributions over [N] that essentially matches the best non-private algorithm. Let C_N be the family of all probability distributions over [N]. We have the following: Theorem 3. There is a computationally efficient (ϵ, 0)-differentially private (α, β)-learning algorithm for C_N that uses n = O((N + log(1/β))/α² + N log(1/β)/(ϵα)) samples.
The algorithm proceeds as follows: Given a dataset S of n samples from the unknown target distribution p over [N], it outputs the hypothesis h = hist(S) + η = p̂_n + η, where η ∈ R^N is sampled from the N-dimensional Laplace distribution with standard deviation 1/(ϵn). The simple analysis is deferred to Appendix A. A t-histogram over [N] is a function h : [N] → R that is piecewise constant with at most t interval pieces, i.e., there is a partition I of [N] into intervals I_1, . . . , I_t such that h is constant on each I_i. Let H_t(I) be the family of all t-histogram distributions over [N] with respect to partition I = {I_1, . . . , I_t}. Given sample access to a distribution p over [N], our goal is to output a hypothesis h : [N] → [0, 1] that satisfies d_TV(h, p) ≤ C · opt_t(p) + α, where opt_t(p) = inf_{g∈H_t(I)} d_TV(g, p). We show the following: Theorem 4. There is a computationally efficient (ϵ, 0)-differentially private (α, β)-agnostic learning algorithm for H_t(I) that uses n = O((t + log(1/β))/α² + t log(1/β)/(ϵα)) samples. The main idea of the proof is that the differentially private learning problem for H_t(I) can be reduced to the same problem over distributions of support [t]. The theorem then follows by an application of Theorem 3. See Appendix A for further details. Theorem 4 gives differentially private learners for any family of distributions over [N] that can be well-approximated by histograms over a fixed partition, including monotone distributions and distributions with a known mode. In the remainder of this section, we focus on approximate privacy, i.e., (ϵ, δ)-differential privacy for δ > 0, and show that for a wide range of natural and well-studied distribution families there exists a computationally efficient and differentially private algorithm whose sample size is at most a factor of 2^{O(log* N)} worse than its non-private counterpart.
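Theorem 3's learner (empirical histogram plus Laplace noise) is simple enough to sketch. The paper specifies Laplace noise with standard deviation 1/(ϵn); the sketch below instead uses Laplace scale 2/(ϵn), the usual calibration for the histogram's ℓ1-sensitivity of 2/n, and projects the noisy vector back onto a probability vector. It illustrates the mechanism only, not the authors' exact constants, and the names are ours:

```python
import numpy as np

def private_learn(samples, N, eps, rng):
    # Non-private step: empirical distribution hist(S) over {0, ..., N-1}.
    n = len(samples)
    hist = np.bincount(samples, minlength=N) / n
    # Privacy step: changing one sample moves two histogram entries by 1/n
    # each (L1-sensitivity 2/n), so Laplace scale 2/(eps*n) gives (eps, 0)-DP.
    noisy = hist + rng.laplace(scale=2.0 / (eps * n), size=N)
    # Post-processing (privacy-preserving): project back to a probability vector.
    noisy = np.clip(noisy, 0.0, None)
    return noisy / noisy.sum()
```

Note that the clipping and renormalization are post-processing of a private output and therefore do not degrade the privacy guarantee.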
In particular, we give a differentially private version of the algorithm in [4]. For a wide range of distributions, our algorithm has near-optimal sample complexity and runs in time that is nearly-linear in the sample size and logarithmic in the domain size. We can view our overall private learning algorithm as a reduction. For the sake of concreteness, we state our approach for the case of histograms, the generalization to piecewise polynomials being essentially identical. Let H_t be the family of all t-histogram distributions over [N] (with unknown partition). In the non-private setting, a combination of Theorems 1 and 2 (see appendix) implies that H_t is (α, β)-agnostically learnable using n = Θ((t + log(1/β))/α²) samples. The algorithm of [4] starts with the empirical distribution p̂_n and post-processes it to obtain an (α, β)-accurate hypothesis h. Let A_k be the collection of subsets of [N] that can be expressed as unions of at most k disjoint intervals. The important property of the empirical distribution p̂_n is that with high probability, p̂_n is α-close to the target distribution p in A_k-distance for any k = O(t). The crucial observation that enables our generalization is that the algorithm of [4] achieves the same performance guarantees starting from any hypothesis q such that ∥p − q∥_{A_O(t)} ≤ α (a potential difference is in the running time of the algorithm, which depends on the support and structure of the distribution q). This observation motivates the following two-step differentially private algorithm: (1) Starting from the empirical distribution p̂_n, efficiently construct an (ϵ, δ)-differentially private hypothesis q such that with probability at least 1 − β/2 it holds ∥q − p̂_n∥_{A_O(t)} ≤ α/2. (2) Pass q as input to the learning algorithm of [4] with parameters (α/2, β/2) and return its output hypothesis h. We now proceed to sketch correctness. Since q is (ϵ, δ)-differentially private and the algorithm of Step (2) is only a function of q, the composition theorem implies that h is also (ϵ, δ)-differentially private. Recall that with probability at least 1 − β/2 we have ∥p − p̂_n∥_{A_O(t)} ≤ α/2.
By the properties of q in Step (1), a union bound and an application of the triangle inequality imply that with probability at least 1 − β we have ∥p − q∥_{A_O(t)} ≤ α. Hence, the output h of Step (2) is an (α, β)-accurate agnostic hypothesis. We have thus sketched a proof of the following lemma: Lemma 1. Suppose there is an (ϵ, δ)-differentially private synthetic data algorithm under the A_O(t)-distance metric that is (α/2, β/2)-accurate on databases of size n, where n = Ω((t + log(1/β))/α²). Then, there exists an (α, β)-accurate agnostic learning algorithm for H_t with sample complexity n. Recent work of Bun et al. [9] gives an efficient differentially private synthetic data algorithm under the Kolmogorov distance metric: Proposition 1. [9] There is an (ϵ, δ)-differentially private (α, β)-accurate synthetic data algorithm with respect to the d_K-distance on databases of size n over [N], assuming n = Ω((1/(ϵα)) · 2^{O(log* N)} · ln(1/αβϵδ)). The algorithm runs in time O(n · log N). Note that the Kolmogorov distance is equivalent to the A_2-distance up to a factor of 2. Hence, by applying the above proposition for α′ = α/t one obtains an (α, β)-accurate synthetic data algorithm with respect to the A_t-distance. Combining the above, we obtain the following: Theorem 5. There is an (ϵ, δ)-differentially private (α, β)-agnostic learning algorithm for H_t that uses n = O((t/α²) · ln(1/β) + (t/(ϵα)) · 2^{O(log* N)} · ln(1/αβϵδ)) samples and runs in time Õ(n) + O(n · log N). As an immediate corollary of Theorem 5, we obtain nearly sample-optimal and computationally efficient differentially private estimators for all the structured discrete distribution families studied in [3, 4].
These include well-known classes of shape-restricted densities including (mixtures of) unimodal and multimodal densities (with unknown mode locations), monotone hazard rate (MHR) and log-concave distributions, and others. Due to space constraints, we do not enumerate the full descriptions of these classes or statements of these results here but instead refer the interested reader to [3, 4]. 4 Maximum Error Rule for Private Kolmogorov Distance Approximation In this section, we describe a simple and fast algorithm for privately approximating an input histogram with respect to the Kolmogorov distance. Our private algorithm relies on a simple (non-private) iterative greedy algorithm to approximate a given histogram (empirical distribution) in Kolmogorov distance, which we term MAXIMUMERRORRULE. This algorithm performs a set of basic operations on the data and can be effectively implemented in the private setting. To describe the non-private version of MAXIMUMERRORRULE, we point out a connection of the Kolmogorov distance approximation problem to the problem of approximating a monotone univariate function by a piecewise linear function. Let p̂_n be the empirical probability distribution over [N], and let P̂_n denote the corresponding empirical CDF. Note that P̂_n : [N] → [0, 1] is monotone non-decreasing and piecewise constant with at most n pieces. We would like to approximate p̂_n by a piecewise uniform distribution with a corresponding piecewise linear CDF. It is easy to see that this is exactly the problem of approximating a monotone function by a piecewise linear function in ℓ∞-norm. The MAXIMUMERRORRULE works as follows: Starting with the trivial linear approximation that interpolates between (0, 0) and (N, 1), the algorithm iteratively refines its approximation to the target empirical CDF using a greedy criterion.
In each iteration, it finds the point (x, y) of the true curve (the empirical CDF P̂_n) at which the current piecewise linear approximation disagrees most strongly with the target CDF (in ℓ∞-norm). It then refines the previous approximation by adding the point (x, y) and interpolating linearly between the new point and the previous two adjacent points of the approximation. See Figure 1 for a graphical illustration of our algorithm. The MAXIMUMERRORRULE is a popular method for monotone curve approximation whose convergence rate has been analyzed under certain assumptions on the structure of the input curve. For example, if the monotone input curve satisfies a Lipschitz condition, it is known that the ℓ∞-error after T iterations scales as O(1/T²) (see, e.g., [27] and references therein). There are a number of challenges towards making this algorithm differentially private. The first is that we cannot exactly select the maximum error point. Instead, we can only choose an approximate maximizer using a differentially private sub-routine. The standard algorithm for choosing such a point would be the exponential mechanism of McSherry and Talwar [28]. Unfortunately, this algorithm falls short of our goals in two respects. First, it introduces a linear dependence on the domain size in the running time, making the algorithm prohibitively inefficient over large domains. Second, it introduces a logarithmic dependence on the domain size in the error of our approximation. In place of the exponential mechanism, we design a sub-routine using the “choosing mechanism” of Beimel, Nissim, and Stemmer [25]. Our sub-routine runs in logarithmic time in the domain size and achieves a doubly-logarithmic dependence in the error. See Figure 2 for pseudocode of our algorithm. In the description of the algorithm, we think of A_t as a CDF defined by a sequence of points (0, 0), (x_1, y_1), . . . , (x_k, y_k), (N, 1) specifying the value of the CDF at various discrete points of the domain.
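The non-private MAXIMUMERRORRULE described above can be sketched as follows (illustrative Python, not the paper's Julia implementation; names and the materialized-CDF representation are ours and only practical for moderate N):

```python
import numpy as np

def max_error_rule(samples, N, T):
    # Greedily approximate the empirical CDF over {1, ..., N} by a piecewise
    # linear CDF, adding the maximum-error point of the true curve each round.
    xs = np.arange(1, N + 1)
    ecdf = np.searchsorted(np.sort(samples), xs, side="right") / len(samples)
    knots_x, knots_y = np.array([0, N]), np.array([0.0, 1.0])
    for _ in range(T):
        approx = np.interp(xs, knots_x, knots_y)
        j = int(np.argmax(np.abs(approx - ecdf)))   # maximum-error point
        if xs[j] in knots_x:                        # nothing left to refine
            break
        order = np.argsort(np.append(knots_x, xs[j]))
        knots_x = np.append(knots_x, xs[j])[order]
        knots_y = np.append(knots_y, ecdf[j])[order]
    err = float(np.max(np.abs(np.interp(xs, knots_x, knots_y) - ecdf)))
    return knots_x, knots_y, err
```

The ℓ∞-error is non-increasing in practice and drops quickly when the empirical CDF is close to piecewise linear; the private version replaces the exact argmax with the choosing mechanism and the exact knot heights with Laplace-noised interval weights.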
We denote by weight(I, A_t) ∈ [0, 1] the weight of the interval I according to the CDF A_t, where the value at missing points in the domain is obtained by linear interpolation. In other words, A_t represents a piecewise linear CDF (corresponding to a piecewise constant histogram). Similarly, we let weight(I, S) ∈ [0, 1] denote the weight of interval I according to the sample S, that is, |S ∩ I|/|S|. We show that our algorithm satisfies (ϵ, δ)-differential privacy (see Appendix B): Lemma 2. For every ϵ ∈ (0, 2), δ > 0, MAXIMUMERRORRULE satisfies (ϵ, δ)-differential privacy. Next, we provide two performance guarantees for our algorithm. The first shows that the running time per iteration is at most O(n log N). The second shows that if at any step t there is a “bad” interval in I that has large error, then our algorithm finds such a bad interval where the quantitative loss depends only doubly-logarithmically on the domain size (see Appendix B for the proof of the following theorem).

Figure 1: CDF approximation after T = 0, 1, 2, 3 iterations.

MAXIMUMERRORRULE(S ∈ [N]^n, privacy parameters ϵ, δ)
For t = 1 to T:
1. I = FINDBADINTERVAL(A_{t−1}, S)
2. A_t = UPDATE(A_{t−1}, S, I)

FINDBADINTERVAL
1. Let I be the collection of all dyadic intervals of the domain.
2. For each J ∈ I, let q(J; S) = |weight(J, A_{t−1}) − weight(J, S)|.
3. Output an I ∈ I sampled from the choosing mechanism with score function q over the collection I with privacy parameters (ϵ/2T, δ/T).

UPDATE
1. Let I = (l, r) be the input interval. Compute w_l = weight([1, l], S) + Laplace(0, 1/(2nϵ)) and w_r = weight([l + 1, r], S) + Laplace(0, 1/(2nϵ)).
2. Output the CDF obtained from A_{t−1} by adding the points (l, w_l) and (r, w_l + w_r) to the graph of A_{t−1}.

Figure 2: Maximum Error Rule (MERR).

Proposition 2. MERR runs in time O(Tn log N). Furthermore, for every step t, with probability 1 − β, the interval I selected at step t satisfies |weight(I, A_{t−1}) − weight(I, S)| ≥ OPT − O((1/(ϵn)) · log(n log N) · log(1/(βϵδ))).
Recall that OPT = max_{J∈I} |weight(J, A_{t−1}) − weight(J, S)|. 5 Experiments In addition to our theoretical results from the previous sections, we also investigate the empirical performance of our private distribution learning algorithm based on the maximum error rule. The focus of our experiments is the learning error achieved by the private algorithm for various distributions. For this, we employ two types of data sets: multiple synthetic data sets derived from mixtures of well-known distributions (see Appendix C), and a data set from Higgs experiments [29]. The synthetic data sets allow us to vary a single parameter (in particular, the domain size) while keeping the remaining problem parameters constant. We have chosen a distribution from the Higgs data set because it gives rise to a large domain size. Our results show that the maximum error rule finds a good approximation of the underlying distribution, matching the learning error of a non-private baseline when the number of samples is sufficiently large. Moreover, our algorithm is very efficient and runs in less than 5 seconds for n = 10^7 samples on a domain of size N = 10^18. We implemented our algorithm in the Julia programming language (v0.3) and ran the experiments on an Intel Core i5-4690K CPU (3.5–3.9 GHz, 6 MB cache). In all experiments involving our private learning algorithm, we set the privacy parameters to ϵ = 1 and δ = 1/n. Since the noise magnitude depends on 1/(ϵn), varying ϵ has the same effect as varying the sample size n. Similarly, changes in δ are related to changes in n, and therefore we only consider this setting of privacy parameters. Higgs data. In addition to the synthetic data mentioned above, we use the lepton pT (transverse momentum) feature of the Higgs data set (see Figure 2e of [29]). The data set contains roughly 11 million samples, which we use as the unknown distribution.
Since the values are specified with 18 digits of accuracy, we interpret them as discrete values in [N] for N = 10^18. We then generate a sample from this data set by taking the first n samples and pass this subset as input to our private distribution learning algorithm. This time, we measure the error as the Kolmogorov distance between the hypothesis returned by our algorithm and the CDF given by the full set of 11 million samples. In this experiment (Figure 3), we again see that the maximum error rule achieves a good learning error. Moreover, we investigate the following two aspects of the algorithm: (i) The number of steps taken by the maximum error rule influences the learning error. In particular, a smaller number of steps leads to a better approximation for small values of n, while more samples allow us to achieve a better error with a larger number of steps. (ii) Our algorithm is very efficient. Even for the largest sample size n = 10^7 and the largest number of MERR steps, our algorithm runs in less than 5 seconds. Note that on the same machine, simply sorting n = 10^7 floating point numbers takes about 0.6 seconds. Since our algorithm involves a sorting step, this shows that the overhead added by the maximum error rule is only about 7× compared to sorting. In particular, this implies that no algorithm that relies on sorted samples can outperform our algorithm by a large margin. Limitations and future work. As we previously saw, the performance of the algorithm varies with the number of iterations. Currently this is a parameter that must be optimized over separately, for example, by choosing the best run privately via the exponential mechanism. This is standard practice in the privacy literature, but it would be more appealing to find an adaptive method of choosing this parameter on the fly as the algorithm obtains more information about the data. There remains a gap in sample complexity between the private and the non-private algorithm.
One reason for this is the relatively large constants in the privacy analysis of the choosing mechanism [9]. With a tighter privacy analysis, one could hope to reduce the sample size requirements of our algorithm by up to an order of magnitude. It is likely that our algorithm could also benefit from certain post-processing steps such as smoothing the output histogram. We did not evaluate such techniques here for simplicity and clarity of the experiments, but this is a promising direction. Figure 3: Evaluation of our private learning algorithm on the Higgs data set. The left plot shows the Kolmogorov error achieved for various sample sizes n and number of steps taken by the maximum error rule (m). The right plot displays the corresponding running times of our algorithm. Acknowledgments Ilias Diakonikolas was supported by EPSRC grant EP/L021749/1 and a Marie Curie Career Integration grant. Ludwig Schmidt was supported by MADALGO and a grant from the MIT-Shell Initiative. References [1] C. Dwork. The differential privacy frontier (extended abstract). In TCC, pages 496–502, 2009. [2] S. Chan, I. Diakonikolas, R. Servedio, and X. Sun. Learning mixtures of structured distributions over discrete domains. In SODA, 2013. [3] S. Chan, I. Diakonikolas, R. Servedio, and X. Sun. Efficient density estimation via piecewise polynomial approximation. In STOC, pages 604–613, 2014. [4] J. Acharya, I. Diakonikolas, J. Li, and L. Schmidt. Sample-optimal density estimation in nearly-linear time. Available at http://arxiv.org/abs/1506.00671, 2015. [5] M. Kearns, Y. Mansour, D. Ron, R. Rubinfeld, R. Schapire, and L. Sellie. On the learnability of discrete distributions. In Proc. 26th STOC, pages 273–282, 1994. [6] L. Devroye and G. Lugosi. Combinatorial methods in density estimation.
Springer Series in Statistics, Springer, 2001. [7] L. Birgé. Estimation of unimodal densities without smoothness assumptions. Annals of Statistics, 25(3):970–981, 1997. [8] S. Chan, I. Diakonikolas, R. Servedio, and X. Sun. Near-optimal density estimation in near-linear time using variable-width histograms. In NIPS, pages 1844–1852, 2014. [9] M. Bun, K. Nissim, U. Stemmer, and S. P. Vadhan. Differentially private release and learning of threshold functions. CoRR, abs/1504.07553, 2015. [10] U. Grenander. On the theory of mortality measurement. Skand. Aktuarietidskr., 39:125–153, 1956. [11] B. L. S. Prakasa Rao. Estimation of a unimodal density. Sankhya Ser. A, 31:23–36, 1969. [12] P. Groeneboom. Estimating a monotone density. In Proc. of the Berkeley Conference in Honor of Jerzy Neyman and Jack Kiefer, pages 539–555, 1985. [13] L. Birgé. Estimating a density under order restrictions: Nonasymptotic minimax risk. Ann. of Stat., pages 995–1012, 1987. [14] F. Balabdaoui and J. A. Wellner. Estimation of a k-monotone density: Limit distribution theory and the spline connection. The Annals of Statistics, 35(6):2536–2564, 2007. [15] L. Dümbgen and K. Rufibach. Maximum likelihood estimation of a log-concave density and its distribution function: Basic properties and uniform consistency. Bernoulli, 15(1):40–68, 2009. [16] G. Walther. Inference and modeling with log-concave distributions. Stat. Science, 2009. [17] Y. Freund and Y. Mansour. Estimating a mixture of two product distributions. In COLT, 1999. [18] J. Feldman, R. O’Donnell, and R. Servedio. Learning mixtures of product distributions over discrete domains. In FOCS, pages 501–510, 2005. [19] C. Daskalakis, I. Diakonikolas, and R. A. Servedio. Learning k-modal distributions via testing. In SODA, pages 1371–1385, 2012. [20] S. L. Warner. Randomized response: A survey technique for eliminating evasive answer bias. Journal of the American Statistical Association, 60(309), 1965. [21] J. C. Duchi, M. I. Jordan, and M. J.
Wainwright. Local privacy and statistical minimax rates. In FOCS, pages 429–438, 2013. [22] J. C. Duchi, M. J. Wainwright, and M. I. Jordan. Local privacy and minimax bounds: Sharp rates for probability estimation. In NIPS, pages 1529–1537, 2013. [23] M. Hardt, K. Ligett, and F. McSherry. A simple and practical algorithm for differentially-private data release. In NIPS, 2012. [24] C. Li, M. Hay, G. Miklau, and Y. Wang. A data- and workload-aware query answering algorithm for range queries under differential privacy. PVLDB, 7(5):341–352, 2014. [25] A. Beimel, K. Nissim, and U. Stemmer. Private learning and sanitization: Pure vs. approximate differential privacy. In RANDOM, pages 363–378, 2013. [26] A. Dvoretzky, J. Kiefer, and J. Wolfowitz. Asymptotic minimax character of the sample distribution function and of the classical multinomial estimator. Ann. Mathematical Statistics, 27(3):642–669, 1956. [27] G. Rote. The convergence rate of the sandwich algorithm for approximating convex functions. Computing, 48:337–361, 1992. [28] F. McSherry and K. Talwar. Mechanism design via differential privacy. In FOCS, pages 94–103, 2007. [29] Pierre Baldi, Peter Sadowski, and Daniel Whiteson. Searching for exotic particles in high-energy physics with deep learning. Nature Communications, (5), 2014. [30] C. Dwork, G. N. Rothblum, and S. Vadhan. Boosting and differential privacy. In FOCS, 2010.
Generative Image Modeling Using Spatial LSTMs Lucas Theis University of Tübingen 72076 Tübingen, Germany lucas@bethgelab.org Matthias Bethge University of Tübingen 72076 Tübingen, Germany matthias@bethgelab.org Abstract Modeling the distribution of natural images is challenging, partly because of strong statistical dependencies which can extend over hundreds of pixels. Recurrent neural networks have been successful in capturing long-range dependencies in a number of problems but only recently have found their way into generative image models. We here introduce a recurrent image model based on multidimensional long short-term memory units which are particularly suited for image modeling due to their spatial structure. Our model scales to images of arbitrary size and its likelihood is computationally tractable. We find that it outperforms the state of the art in quantitative comparisons on several image datasets and produces promising results when used for texture synthesis and inpainting. 1 Introduction The last few years have seen tremendous progress in learning useful image representations [6]. While early successes were often achieved through the use of generative models [e.g., 13, 23, 30], recent breakthroughs were mainly driven by improvements in supervised techniques [e.g., 20, 34]. Yet unsupervised learning has the potential to tap into the much larger source of unlabeled data, which may be important for training bigger systems capable of a more general scene understanding. For example, multimodal data is abundant but often unlabeled, yet can still greatly benefit unsupervised approaches [36]. Generative models provide a principled approach to unsupervised learning. A perfect model of natural images would be able to optimally predict parts of an image given other parts of an image and thereby clearly demonstrate a form of scene understanding.
When extended by labels, the Bayesian framework can be used to perform semi-supervised learning in the generative model [19, 28] while it is less clear how to combine other unsupervised approaches with discriminative learning. Generative image models are also useful in more traditional applications such as image reconstruction [33, 35, 49] or compression [47]. Recently there has been a renewed strong interest in the development of generative image models [e.g., 4, 8, 10, 11, 18, 24, 31, 35, 45, 47]. Most of this work has tried to bring to bear the flexibility of deep neural networks on the problem of modeling the distribution of natural images. One challenge in this endeavor is to find the right balance between tractability and flexibility. The present article contributes to this line of research by introducing a fully tractable yet highly flexible image model. Our model combines multi-dimensional recurrent neural networks [9] with mixtures of experts. More specifically, the backbone of our model is formed by a spatial variant of long short-term memory (LSTM) [14]. One-dimensional LSTMs have been particularly successful in modeling text and speech [e.g., 38, 39], but have also been used to model the progression of frames in video [36] and very recently to model single images [11]. In contrast to earlier work on modeling images, here we use multi-dimensional LSTMs [9] which naturally lend themselves to the task of generative image modeling due to their spatial structure and ability to capture long-range correlations. Figure 1: (A) We factorize the distribution of images such that the prediction of a pixel (black) may depend on any pixel in the upper-left green region. (B) A graphical model representation of an MCGSM with a causal neighborhood limited to a small region. (C) A visualization of our recurrent image model with two layers of spatial LSTMs.
The pixels of the image are represented twice and some arrows are omitted for clarity. Through feedforward connections, the prediction of a pixel depends directly on its neighborhood (green), but through recurrent connections it has access to the information in a much larger region (red). To model the distribution of pixels conditioned on the hidden states of the neural network, we use mixtures of conditional Gaussian scale mixtures (MCGSMs) [41]. This class of models can be viewed as a generalization of Gaussian mixture models, but their parametrization makes them much more suitable for natural images. By treating images as instances of a stationary stochastic process, this model allows us to sample and capture the correlations of arbitrarily large images.

2 A recurrent model of natural images

In the following, we first review and extend the MCGSM [41] and multi-dimensional LSTMs [9] before explaining how to combine them into a recurrent image model. Section 3 will demonstrate the validity of our approach by evaluating and comparing the model on a number of image datasets.

2.1 Factorized mixtures of conditional Gaussian scale mixtures

One successful approach to building flexible yet tractable generative models has been to use fully visible belief networks [21, 27]. To apply such a model to images, we have to give the pixels an ordering and specify the distribution of each pixel conditioned on its parent pixels. Several parametrizations have been suggested for the conditional distributions in the context of natural images [5, 15, 41, 44, 45]. We here review and extend the work of Theis et al. [41] who proposed to use mixtures of conditional Gaussian scale mixtures (MCGSMs). Let x be a grayscale image patch and x_ij be the intensity of the pixel at location ij. Further, let x_<ij designate the set of pixels x_mn such that m < i or m = i and n < j (Figure 1A). Then

p(x; θ) = ∏_{i,j} p(x_ij | x_<ij; θ) (1)

for the distribution of any parametric model with parameters θ.
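The raster-scan factorization in Equation 1 is just the probability chain rule; a minimal sketch of evaluating such a model's log-likelihood (the conditional `cond_logpdf` is a hypothetical stand-in for any parametric conditional, not the paper's MCGSM):

```python
import numpy as np

def image_log_likelihood(x, cond_logpdf):
    """Sum conditional log-densities in raster-scan order (Equation 1).

    x            -- 2D array of pixel intensities
    cond_logpdf  -- hypothetical callable returning log p(x_ij | x_<ij)
    """
    H, W = x.shape
    total = 0.0
    for i in range(H):
        for j in range(W):
            # x_<ij: all pixels in rows above i, plus pixels left of j in row i
            parents = np.concatenate([x[:i].ravel(), x[i, :j]])
            total += cond_logpdf(x[i, j], parents)
    return total

# Sanity check: with independent standard-normal conditionals, the
# factorized likelihood equals the joint i.i.d. Gaussian likelihood.
rng = np.random.default_rng(0)
x = rng.standard_normal((4, 4))
iid = lambda xij, parents: -0.5 * (xij**2 + np.log(2 * np.pi))
assert np.isclose(image_log_likelihood(x, iid),
                  -0.5 * np.sum(x**2) - 0.5 * x.size * np.log(2 * np.pi))
```

Since no independence assumption is made, any joint density can be written this way; the modeling choice lies entirely in how the conditionals are parametrized.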
Note that this factorization does not make any independence assumptions but is simply an application of the probability chain rule. Further note that the conditional distributions all share the same set of parameters. One way to improve the representational power of a model is thus to endow each conditional distribution with its own set of parameters,

p(x; {θ_ij}) = ∏_{i,j} p(x_ij | x_<ij; θ_ij). (2)

Applying this trick to mixtures of Gaussian scale mixtures (MoGSMs) yields the MCGSM [40]. Untying shared parameters can drastically increase the number of parameters. For images, it can easily be reduced again by adding assumptions. For example, we can limit x_<ij to a smaller neighborhood surrounding the pixel by making a Markov assumption. We will refer to the resulting set of parents as the pixel's causal neighborhood (Figure 1B). Another reasonable assumption is stationarity or shift invariance, in which case we only have to learn one set of parameters θ_ij which can then be used at every pixel location. Similar to convolutions in neural networks, this allows the model to easily scale to images of arbitrary size. While this assumption reintroduces parameter sharing constraints into the model, the constraints are different from the ones induced by the joint mixture model. The conditional distribution in an MCGSM takes the form of a mixture of experts,

p(x_ij | x_<ij, θ_ij) = ∑_{c,s} p(c, s | x_<ij, θ_ij) · p(x_ij | x_<ij, c, s, θ_ij), (3)

where the first factor acts as a gate and the second as an expert, and where the sum is over mixture component indices c corresponding to different covariances and scales s corresponding to different variances. The gates and experts in an MCGSM are given by

p(c, s | x_<ij) ∝ exp(η_cs − ½ e^{α_cs} x_<ij⊤ K_c x_<ij), (4)
p(x_ij | x_<ij, c, s) = N(x_ij; a_c⊤ x_<ij, e^{−α_cs}), (5)

where K_c is positive definite. The number of parameters of an MCGSM still grows quadratically with the dimensionality of the causal neighborhood.
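A direct, unoptimized sketch of the MCGSM conditional in Equations 3-5 (all shapes and parameter names here are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

def mcgsm_logpdf(x_ij, x_par, eta, alpha, K, a):
    """log p(x_ij | x_<ij) for an MCGSM, following Equations 3-5.

    Assumed shapes: eta, alpha are (C, S); K is (C, D, D) with each K_c
    positive definite; a is (C, D); x_par is the length-D causal
    neighborhood.
    """
    quad = np.einsum('d,cde,e->c', x_par, K, x_par)        # x^T K_c x, per c
    log_gate = eta - 0.5 * np.exp(alpha) * quad[:, None]   # Eq. 4 (unnormalized)
    log_gate -= np.logaddexp.reduce(log_gate.ravel())      # normalize over (c, s)
    mean = a @ x_par                                       # expert means, Eq. 5
    var = np.exp(-alpha)                                   # expert variances (C, S)
    log_expert = -0.5 * (np.log(2 * np.pi * var)
                         + (x_ij - mean[:, None]) ** 2 / var)
    # Eq. 3: mixture over components c and scales s, in log space
    return np.logaddexp.reduce((log_gate + log_expert).ravel())
```

Because the gates are normalized over all (c, s) pairs, the mixture integrates to one in x_ij for any causal neighborhood.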
To further reduce the number of parameters, we introduce a factorized form of the MCGSM with additional parameter sharing by replacing K_c with ∑_n β_cn² b_n b_n⊤. This factorized MCGSM allows us to use larger neighborhoods and more mixture components. A detailed derivation of a more general version which also allows for multivariate pixels is given in Supplementary Section 1.

2.2 Spatial long short-term memory

In the following we briefly describe the spatial LSTM (SLSTM), a special case of the multidimensional LSTM first described by Graves & Schmidhuber [9]. At the core of the model are memory units c_ij and hidden units h_ij. For each location ij on a two-dimensional grid, the operations performed by the spatial LSTM are given by

c_ij = g_ij ⊙ i_ij + c_{i,j−1} ⊙ f^c_ij + c_{i−1,j} ⊙ f^r_ij,
h_ij = tanh(c_ij ⊙ o_ij),
(g_ij, o_ij, i_ij, f^r_ij, f^c_ij) = (tanh, σ, σ, σ, σ) applied componentwise to T_{A,b}(x_<ij, h_{i,j−1}, h_{i−1,j}), (6)

where σ is the logistic sigmoid function, ⊙ indicates a pointwise product, and T_{A,b} is an affine transformation which depends on the only parameters of the network A and b. The gating units i_ij and o_ij determine which memory units are affected by the inputs through g_ij, and which memory states are written to the hidden units h_ij. In contrast to a regular LSTM defined over time, each memory unit of a spatial LSTM has two preceding states c_{i,j−1} and c_{i−1,j} and two corresponding forget gates f^c_ij and f^r_ij.

2.3 Recurrent image density estimator

We use a grid of SLSTM units to sequentially read relatively small neighborhoods of pixels from the image, producing a hidden vector at every pixel. The hidden states are then fed into a factorized MCGSM to predict the state of the corresponding pixel, that is, p(x_ij | x_<ij) = p(x_ij | h_ij). Importantly, the state of the hidden vector only depends on pixels in x_<ij and does not violate the factorization given in Equation 1.
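The SLSTM update in Equation 6 can be sketched as a single grid step; the parameter packing (one affine map producing all five gate pre-activations) is an assumption about layout, chosen only for readability:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def slstm_step(x_par, h_left, h_top, c_left, c_top, A, b):
    """One spatial LSTM update (Equation 6).

    x_par          -- causal neighborhood of the current pixel (length D)
    h_left, h_top  -- hidden states h_{i,j-1}, h_{i-1,j} (length H)
    c_left, c_top  -- memory states  c_{i,j-1}, c_{i-1,j} (length H)
    A, b           -- the affine map T_{A,b}: shapes (5H, D + 2H) and (5H,)
    """
    z = A @ np.concatenate([x_par, h_left, h_top]) + b
    g, o, i, f_r, f_c = np.split(z, 5)
    g = np.tanh(g)                             # proposed cell content
    o, i = sigmoid(o), sigmoid(i)              # output and input gates
    f_r, f_c = sigmoid(f_r), sigmoid(f_c)      # the two forget gates
    c = g * i + c_left * f_c + c_top * f_r     # two preceding memory states
    h = np.tanh(c * o)
    return c, h

# Scan a small grid; index 0 holds the zero states used at the image border.
rng = np.random.default_rng(0)
D, H, n = 6, 4, 3
A = 0.1 * rng.standard_normal((5 * H, D + 2 * H))
b = np.zeros(5 * H)
h = np.zeros((n + 1, n + 1, H))
c = np.zeros((n + 1, n + 1, H))
for i in range(1, n + 1):
    for j in range(1, n + 1):
        c[i, j], h[i, j] = slstm_step(rng.standard_normal(D),
                                      h[i, j - 1], h[i - 1, j],
                                      c[i, j - 1], c[i - 1, j], A, b)
```

The scan makes the dependency structure explicit: each state sees its left and top neighbors, so information propagates from the upper-left corner across the whole grid.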
Nevertheless, the recurrent network allows this recurrent image density estimator (RIDE) to use pixels of a much larger region for prediction, and to nonlinearly transform the pixels before applying the MCGSM. We can further increase the representational power of the model by stacking spatial LSTMs to obtain a deep yet still completely tractable recurrent image model (Figure 1C).

2.4 Related work

Larochelle & Murray [21] derived a tractable density estimator (NADE) in a manner similar to how the MCGSM was derived [41], but using restricted Boltzmann machines (RBM) instead of mixture models as a starting point. In contrast to the MCGSM, NADE tries to keep the weight sharing constraints induced by the RBM (Equation 1). Uria et al. extended NADE to real values [44] and introduced hidden layers to the model [45]. Gregor et al. [10] describe a related autoregressive network for binary data which additionally allows for stochastic hidden units. Gregor et al. [11] used one-dimensional LSTMs to generate images in a sequential manner (DRAW). Because the model was defined over Bernoulli variables, normalized RGB values had to be treated as probabilities, making a direct comparison with other image models difficult. In contrast to our model, the presence of stochastic latent variables in DRAW means that its likelihood cannot be evaluated but has to be approximated. Ranzato et al. [31] and Srivastava et al. [37] use one-dimensional recurrent neural networks to model videos, but recurrency is not used to describe the distribution over individual frames. Srivastava et al. [37] optimize a squared error corresponding to a Gaussian assumption, while Ranzato et al. [31] try to side-step having to model pixel intensities by quantizing image patches. In contrast, here we also try to solve the problem of modeling pixel intensities by using an MCGSM, which is equipped to model heavy-tailed as well as multi-modal distributions.
3 Experiments

RIDE was trained using stochastic gradient descent with a batch size of 50, momentum of 0.9, and a decreasing learning rate varying between 1 and 10⁻⁴. After each pass through the training set, the MCGSM of RIDE was finetuned using L-BFGS for up to 500 iterations before decreasing the learning rate. No regularization was used except for early stopping based on a validation set. Except where indicated otherwise, the recurrent model used a 5 pixel wide neighborhood and an MCGSM with 32 components and 32 quadratic features (b_n in Section 2.1). Spatial LSTMs were implemented using the Caffe framework [17]. Where appropriate, we augmented the data by horizontal or vertical flipping of images. We found that conditionally whitening the data greatly sped up the training process of both models. Letting y represent a pixel and x its causal neighborhood, conditional whitening replaces these with

x̂ = C_xx^{−1/2} (x − m_x),  ŷ = W (y − C_yx C_xx^{−1/2} x̂ − m_y),  W = (C_yy − C_yx C_xx^{−1} C_yx⊤)^{−1/2}, (7)

where C_yx is the covariance of y and x, and m_x is the mean of x. In addition to speeding up training, this variance normalization step helps to make the learning rates less dependent on the training data. When evaluating the conditional log-likelihood, we compensate for the change in variance by adding the log-Jacobian log |det W|. Note that this preconditioning introduces a shortcut connection from the pixel neighborhood to the predicted pixel which is not shown in Figure 1C.

3.1 Ensembles

Uria et al. [45] found that forming ensembles of their autoregressive model over different pixel orderings significantly improved performance. We here consider a simple trick to produce an ensemble without the need for training different models or to change training procedures. If T_k are linear transformations leaving the targeted image distribution invariant (or approximately invariant) and if p is the distribution of a pretrained model, then we form the ensemble (1/K) ∑_k p(T_k x) |det T_k|.
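The transformation ensemble can be sketched directly; here the eight T_k are the 90-degree rotations combined with a horizontal flip, which are pixel permutations, so |det T_k| = 1 and no Jacobian term is needed (`logpdf` is a hypothetical stand-in for a pretrained model's log-density):

```python
import numpy as np

def ensemble_logpdf(x, logpdf):
    """Log-density of the transformation ensemble (1/K) sum_k p(T_k x).

    The eight T_k are rotations by multiples of 90 degrees, each with and
    without a horizontal flip.  These permute pixels, so |det T_k| = 1.
    `logpdf` stands in for a pretrained model's log-density.
    """
    views = []
    for k in range(4):
        r = np.rot90(x, k)
        views += [r, np.fliplr(r)]
    lps = np.array([logpdf(v) for v in views])
    # log of the average density, computed stably in log space
    return np.logaddexp.reduce(lps) - np.log(len(views))
```

For a density that is already invariant under these permutations (e.g. an i.i.d. model) the ensemble changes nothing; any gain comes from averaging out the model's dependence on orientation.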
Note that this is simply a mixture model over images x. We considered rotating as well as flipping images along the horizontal and vertical axes (yielding an ensemble over 8 transformations). While it could be argued that most of these transformations do not leave the distribution over natural images invariant, we nevertheless observed a noticeable boost in performance.

3.2 Natural images

Several recent image models have been evaluated on small image patches sampled from the Berkeley segmentation dataset (BSDS300) [25]. Although our model's strength lies in its ability to scale to large images and to capture long-range correlations, we include results on BSDS300 to make a connection to this part of the literature. We followed the protocol of Uria et al. [44]. The RGB images were turned to grayscale, uniform noise was added to account for the integer discretization, and the resulting values were divided by 256. The training set of 200 images was split into 180 images for training and 20 images for validation, while the test set contained 100 images. We extracted 8 by 8 image patches from each set and subtracted the average pixel intensity such that each patch's DC component was zero. Because the resulting image patches live on a 63-dimensional subspace, the bottom-right pixel was discarded. We used 1.6 × 10⁶ patches for training, 1.8 × 10⁵ patches for validation, and 10⁶ test patches for evaluation. MCGSMs have not been evaluated on this dataset and so we first tested MCGSMs by training a single factorized MCGSM for each pixel conditioned on all previous pixels in a fixed ordering. We find that already an MCGSM (with 128 components and 48 quadratic features) outperforms all single models including a deep Gaussian mixture model [47] (Table 1). Our ensemble of MCGSMs¹ outperforms an ensemble of RNADEs with 6 hidden layers, which to our knowledge is currently the best result reported on this dataset.

Model | 63 dim. [nat] | 64 dim. [bit/px] | ∞ dim. [bit/px]
RNADE [44] | 152.1 | 3.346 | –
RNADE, 1 hl [45] | 143.2 | 3.146 | –
RNADE, 6 hl [45] | 155.2 | 3.416 | –
EoRNADE, 6 layers [45] | 157.0 | 3.457 | –
GMM, 200 comp. [47, 50] | 153.7 | 3.360 | –
STM, 200 comp. [46] | 155.3 | 3.418 | –
Deep GMM, 3 layers [47] | 156.2 | 3.439 | –
MCGSM, 16 comp. | 155.1 | 3.413 | 3.688
MCGSM, 32 comp. | 155.8 | 3.430 | 3.706
MCGSM, 64 comp. | 156.2 | 3.439 | 3.716
MCGSM, 128 comp. | 156.4 | 3.443 | 3.717
EoMCGSM, 128 comp. | 158.1 | 3.481 | 3.748
RIDE, 1 layer | 150.7 | 3.293 | 3.802
RIDE, 2 layers | 152.1 | 3.346 | 3.869
EoRIDE, 2 layers | 154.5 | 3.400 | 3.899
Table 1: Average log-likelihoods and log-likelihood rates for image patches (without/with DC comp.) and large images extracted from BSDS300 [25].

Model | 256 dim. [bit/px] | ∞ dim. [bit/px]
GRBM [13] | 0.992 | –
ICA [1, 48] | 1.072 | –
GSM | 1.349 | –
ISA [7, 16] | 1.441 | –
MoGSM, 32 comp. [40] | 1.526 | –
MCGSM, 32 comp. | 1.615 | 1.759
RIDE, 1 layer, 64 hid. | 1.650 | 1.816
RIDE, 1 layer, 128 hid. | – | 1.830
RIDE, 2 layers, 64 hid. | – | 1.829
RIDE, 2 layers, 128 hid. | – | 1.839
EoRIDE, 2 layers, 128 hid. | – | 1.859
Table 2: Average log-likelihood rates for image patches and large images extracted from van Hateren's dataset [48].

Training the recurrent image density estimator (RIDE) on the 63-dimensional dataset is more cumbersome. We tried padding image patches with zeros, which was necessary to be able to compute a hidden state at every pixel. The bottom-right pixel was ignored during training and evaluation. This simple approach led to a reduction in performance relative to the MCGSM (Table 1). A possible explanation is that the model cannot distinguish between pixel intensities which are zero and zeros in the padded region. Supplying the model with additional binary indicators as inputs (one for each neighborhood pixel) did not solve the problem. However, we found that RIDE outperforms the MCGSM by a large margin when images were treated as instances of a stochastic process (that is, using infinitely large images).
MCGSMs were trained for up to 3000 iterations of L-BFGS on 10⁶ pixels and corresponding causal neighborhoods extracted from the training images. Causal neighborhoods were 9 pixels wide and 5 pixels high. RIDE was trained for 8 epochs on image patches of increasing size ranging from 8 by 8 to 22 by 22 pixels (that is, gradients were approximated as in backpropagation through time [32]). The right column in Table 1 shows average log-likelihood rates for both models. Analogously to the entropy rate [3], we have for the expected log-likelihood rate:

lim_{N→∞} E[log p(x)] / N² = E[log p(x_ij | x_<ij)], (8)

where x is an N by N image patch. An average log-likelihood rate can be directly computed for the MCGSM, while for RIDE and ensembles we approximated it by splitting the test images into 64 by 64 patches and evaluating on those. To make the two sets of numbers more comparable, we transformed nats as commonly reported on the 63-dimensional data, ℓ_{1:63}, into a bit per pixel log-likelihood rate using the formula (ℓ_{1:63} + ℓ_DC + ln |det A|) / 64 / ln 2. This takes into account a log-likelihood for the missing DC component, ℓ_DC = 0.5020, and the Jacobian of the transformations applied during preprocessing, ln |det A| = −4.1589 (see Supplementary Section 2.2 for details). The two rates in Table 1 are comparable in the sense that their differences express how much better one model would be at losslessly compressing BSDS300 test images than another, where patch-based models would compress patches of an image independently. We highlighted the best result achieved with each model in gray. Note that most models in this list do not scale as well to large images as the MCGSM or RIDE (GMMs in particular) and are therefore unlikely to benefit as much from increasing the patch size. A comparison of the log-likelihood rates reveals that an MCGSM with 16 components applied to large images already captures more correlations than any model applied to small image patches. The difference is particularly striking given that the factorized MCGSM has approximately 3,000 parameters while a GMM with 200 components has approximately 400,000 parameters. Using an ensemble of RIDEs, we are able to further improve this number significantly (Table 1).

¹ Details on how the ensemble of transformations can be applied despite the missing bottom-right pixel are given in Supplementary Section 2.1.

Another dataset frequently used to test generative image models is the dataset published by van Hateren and van der Schaaf [48]. Details of the preprocessing used in this paper are given in Supplementary Section 3. We reevaluated several models for which the likelihood has been reported on this dataset [7, 40, 41, 42]. Likelihood rates as well as results on 16 by 16 patches are given in Table 2. Because of the larger patch size, RIDE here already outperforms the MCGSM on patches.

3.3 Dead leaves

Model | [bit/px]
MCGSM, 12 comp. [41] | 1.244
MCGSM, 32 comp. | 1.294
Diffusion [35] | 1.489
RIDE, 64 hid., 1 layer | 1.402
RIDE, 64 hid., 1 layer, ext. | 1.416
RIDE, 64 hid., 2 layers | 1.438
RIDE, 64 hid., 3 layers | 1.454
RIDE, 128 hid., 3 layers | 1.489
EoRIDE, 128 hid., 3 layers | 1.501
Table 3: Average log-likelihood rates on dead leaf images. A deep recurrent image model is on a par with a deep diffusion model [35]. Using ensembles we are able to further improve the likelihood.

Figure 2: Model performance on dead leaves as a function of the causal neighborhood width (x-axis: neighborhood size 3 to 13 pixels; y-axis: log-likelihood, 1.0 to 1.5 bit/px; curves: MCGSM and RIDE). Simply increasing the neighborhood size of the MCGSM is not sufficient to improve performance.

Dead leaf images are generated by superimposing disks of random intensity and size on top of each other [22, 26].
This simple procedure leads to images which already share many of the statistical properties and challenges of natural images, such as occlusions and long-range correlations, while leaving out others such as non-stationary statistics. They therefore provide an interesting test case for natural image models. We used a set of 1,000 images, where each image is 256 by 256 pixels in size. We compare the performance of RIDE to the MCGSM and a very recently introduced deep multiscale model based on a diffusion process [35]. The same 100 images as in previous literature [35, 41] were used for evaluation and we used the remaining images for training. We find that the introduction of an SLSTM with 64 hidden units greatly improves the performance of the MCGSM. We also tried an extended version of the SLSTM which included memory units as additional inputs (right-hand side of Equation 6). This yielded a small improvement in performance (5th row in Table 3) while adding layers or using more hidden units led to more drastic improvements. Using 3 layers with 128 hidden units in each layer, we find that our recurrent image model is on a par with the deep diffusion model. By using ensembles, we are able to beat all previously published results for this dataset (Table 3). Figure 2 shows that the improved performance of RIDE is not simply due to an effectively larger causal neighborhood but that the nonlinear transformations performed by the SLSTM units matter. Simply increasing the neighborhood size of an MCGSM does not yield the same improvement. Instead, the performance quickly saturates. We also find that the performance of RIDE slightly deteriorates with larger neighborhoods, which is likely caused by optimization difficulties.

Figure 3: From top to bottom: a 256 by 256 pixel crop of a texture [2], a sample generated by an MCGSM trained on the full texture [7], and a sample generated by RIDE (columns: textures D106, D93, D12, D104, D34, D110).
This illustrates that our model can capture a variety of different statistical patterns. The addition of the recurrent neural network seems particularly helpful where there are strong long-range correlations (D104, D34).

3.4 Texture synthesis and inpainting

To get an intuition for the kinds of correlations which RIDE can capture or fails to capture, we tried to use it to synthesize textures. We used several 640 by 640 pixel textures published by Brodatz [2]. The textures were split into sixteen 160 by 160 pixel regions of which 15 were used for training and one randomly selected region was kept for testing purposes. RIDE was trained for up to 6 epochs on patches of increasing size ranging from 20 by 20 to 40 by 40 pixels. Samples generated by an MCGSM and RIDE are shown in Figure 3. Both models are able to capture a wide range of correlation structures. However, the MCGSM seems to struggle with textures having bimodal marginal distributions and periodic patterns (D104, D34, and D110). RIDE clearly improves on these textures, although it also struggles to faithfully reproduce periodic structure. Possible explanations include that LSTMs are not well suited to capture periodicities, or that these failures are not penalized strongly enough by the likelihood. For some textures, RIDE produces samples which are nearly indistinguishable from the real textures (D106 and D110). One application of generative image models is inpainting [e.g., 12, 33, 35]. As a proof of concept, we used our model to inpaint a large (here, 71 by 71 pixels) region in textures (Figure 4). Missing pixels were replaced by sampling from the posterior of RIDE. Unlike the joint distribution, the posterior distribution cannot be sampled directly and we had to resort to Markov chain Monte Carlo methods. We found the following Metropolis-within-Gibbs [43] procedure to be efficient enough. The missing pixels were initialized via ancestral sampling.
Since ancestral sampling is cheap, we generated 5 candidates and used the one with the largest posterior density. Following initialization, we sequentially updated overlapping 5 by 5 pixel regions via Metropolis sampling. Proposals were generated via ancestral sampling and accepted using the acceptance probability

α = min{1, [p(x′) / p(x)] · [p(x_ij | x_<ij) / p(x′_ij | x_<ij)]}, (9)

where here x_ij represents a 5 by 5 pixel patch and x′_ij its proposed replacement. Since evaluating the joint and conditional densities on the entire image is costly, we approximated p using RIDE applied to a 19 by 19 pixel patch surrounding ij. Randomly flipping images vertically or horizontally in between the sampling further helped. Figure 4 shows results after 100 Gibbs sampling sweeps.

Figure 4: The center portion of a texture (left and center) was reconstructed by sampling from the posterior distribution of RIDE (right).

4 Conclusion

We have introduced RIDE, a deep but tractable recurrent image model based on spatial LSTMs. The model exemplifies how recent insights in deep learning can be exploited for generative image modeling and shows superior performance in quantitative comparisons. RIDE is able to capture many different statistical patterns, as demonstrated through its application to textures. This is an important property considering that on an intermediate level of abstraction natural images can be viewed as collections of textures. We have furthermore introduced a factorized version of the MCGSM which allowed us to use more experts and larger causal neighborhoods. This model has few parameters, is easy to train and already on its own performs very well as an image model. It is therefore an ideal building block and may be used to extend other models such as DRAW [11] or video models [31, 37]. Deep generative image models have come a long way since deep belief networks were first applied to natural images [29].
Unlike convolutional neural networks in object recognition, however, no approach has as of yet proven to be a likely solution to the problem of generative image modeling. Further conceptual work will be necessary to come up with a model which can handle both the more abstract high-level as well as the low-level statistics of natural images.

Acknowledgments

The authors would like to thank Aäron van den Oord for insightful discussions and Wieland Brendel, Christian Behrens, and Matthias Kümmerer for helpful input on this paper. This study was financially supported by the German Research Foundation (DFG; priority program 1527, BE 3848/2-1).

References

[1] A. J. Bell and T. J. Sejnowski. The "independent components" of natural scenes are edge filters. Vision Research, 37(23):3327–3338, 1997.
[2] P. Brodatz. Textures: A Photographic Album for Artists and Designers. Dover, New York, 1966. URL http://www.ux.uis.no/~tranden/brodatz.html.
[3] T. Cover and J. Thomas. Elements of Information Theory. Wiley, 2nd edition, 2006.
[4] E. Denton, S. Chintala, A. Szlam, and R. Fergus. Deep generative image models using a Laplacian pyramid of adversarial networks. In Advances in Neural Information Processing Systems 28, 2015.
[5] J. Domke, A. Karapurkar, and Y. Aloimonos. Who killed the directed model? In CVPR, 2008.
[6] J. Donahue, Y. Jia, O. Vinyals, J. Hoffman, N. Zhang, E. Tzeng, and T. Darrell. DeCAF: A deep convolutional activation feature for generic visual recognition. In ICML 31, 2014.
[7] H. E. Gerhard, L. Theis, and M. Bethge. Modeling natural image statistics. In Biologically-inspired Computer Vision—Fundamentals and Applications. Wiley VCH, 2015.
[8] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio. Generative adversarial nets. In Advances in Neural Information Processing Systems 27, 2014.
[9] A. Graves and J. Schmidhuber. Offline handwriting recognition with multidimensional recurrent neural networks.
In Advances in Neural Information Processing Systems 22, 2009.
[10] K. Gregor, I. Danihelka, A. Mnih, C. Blundell, and D. Wierstra. Deep AutoRegressive Networks. In Proceedings of the 31st International Conference on Machine Learning, 2014.
[11] K. Gregor, I. Danihelka, A. Graves, and D. Wierstra. DRAW: A recurrent neural network for image generation. In Proceedings of the 32nd International Conference on Machine Learning, 2015.
[12] N. Heess, C. Williams, and G. E. Hinton. Learning generative texture models with extended fields-of-experts. In BMVC, 2009.
[13] G. Hinton, S. Osindero, and Y. Teh. A fast learning algorithm for deep belief nets. Neural Computation, 2006.
[14] S. Hochreiter and J. Schmidhuber. Long short-term memory. Neural Computation, 9(8), 1997.
[15] R. Hosseini, F. Sinz, and M. Bethge. Lower bounds on the redundancy of natural images. Vision Research, 2010.
[16] A. Hyvärinen and P. O. Hoyer. Emergence of phase and shift invariant features by decomposition of natural images into independent feature subspaces. Neural Computation, 12(7):1705–1720, 2000.
[17] Y. Jia, E. Shelhamer, J. Donahue, S. Karayev, J. Long, R. Girshick, S. Guadarrama, and T. Darrell. Caffe: Convolutional architecture for fast feature embedding, 2014. arXiv:1408.5093.
[18] D. P. Kingma and M. Welling. Auto-encoding variational Bayes. In ICLR, 2014.
[19] D. P. Kingma, D. J. Rezende, S. Mohamed, and M. Welling. Semi-supervised learning with deep generative models. In Advances in Neural Information Processing Systems 27, 2014.
[20] A. Krizhevsky, I. Sutskever, and G. E. Hinton. ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems 25, 2012.
[21] H. Larochelle and I. Murray. The neural autoregressive distribution estimator. In Proceedings of the 14th International Conference on Artificial Intelligence and Statistics, 2011.
[22] A. B. Lee, D. Mumford, and J. Huang.
Occlusion models for natural images: A statistical study of a scale-invariant dead leaves model. International Journal of Computer Vision, 2001.
[23] H. Lee, R. Grosse, R. Ranganath, and A. Y. Ng. Convolutional deep belief networks for scalable unsupervised learning of hierarchical representations. In ICML 26, 2009.
[24] Y. Li, K. Swersky, and R. Zemel. Generative moment matching networks. In ICML 32, 2015.
[25] D. Martin, C. Fowlkes, D. Tal, and J. Malik. A database of human segmented natural images and its application to evaluating segmentation algorithms and measuring ecological statistics. In ICCV, 2001.
[26] G. Matheron. Modèle séquentiel de partition aléatoire. Technical report, CMM, 1968.
[27] R. M. Neal. Connectionist learning of belief networks. Artificial Intelligence, 56:71–113, 1992.
[28] J. Ngiam, Z. Chen, P. W. Koh, and A. Y. Ng. Learning deep energy models. In ICML 28, 2011.
[29] S. Osindero and G. E. Hinton. Modelling image patches with a directed hierarchy of Markov random fields. In Advances in Neural Information Processing Systems 20, 2008.
[30] M. A. Ranzato, J. Susskind, V. Mnih, and G. E. Hinton. On deep generative models with applications to recognition. In IEEE Conference on Computer Vision and Pattern Recognition, 2011.
[31] M. A. Ranzato, A. Szlam, J. Bruna, M. Mathieu, R. Collobert, and S. Chopra. Video (language) modeling: a baseline for generative models of natural videos, 2015. arXiv:1412.6604v2.
[32] A. J. Robinson and F. Fallside. The utility driven dynamic error propagation network. Technical report, Cambridge University, 1987.
[33] S. Roth and M. J. Black. Fields of experts. International Journal of Computer Vision, 82(2), 2009.
[34] K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. In International Conference on Learning Representations, 2015.
[35] J. Sohl-Dickstein, E. A. Weiss, N. Maheswaranathan, and S. Ganguli. Deep unsupervised learning using nonequilibrium thermodynamics.
In ICML 32, 2015.
[36] N. Srivastava and R. Salakhutdinov. Multimodal learning with deep Boltzmann machines. JMLR, 2014.
[37] N. Srivastava, E. Mansimov, and R. Salakhutdinov. Unsupervised learning of video representations using LSTMs. In Proceedings of the 32nd International Conference on Machine Learning, 2015.
[38] M. Sundermeyer, R. Schlüter, and H. Ney. LSTM neural networks for language modeling. In INTERSPEECH, 2012.
[39] I. Sutskever, O. Vinyals, and Q. V. Le. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems 27, 2014.
[40] L. Theis, S. Gerwinn, F. Sinz, and M. Bethge. In all likelihood, deep belief is not enough. JMLR, 2011.
[41] L. Theis, R. Hosseini, and M. Bethge. Mixtures of conditional Gaussian scale mixtures applied to multiscale image representations. PLoS ONE, 7(7), 2012.
[42] L. Theis, J. Sohl-Dickstein, and M. Bethge. Training sparse natural image models with a fast Gibbs sampler of an extended state space. In Advances in Neural Information Processing Systems 25, 2012.
[43] L. Tierney. Markov chains for exploring posterior distributions. The Annals of Statistics, 1994.
[44] B. Uria, I. Murray, and H. Larochelle. RNADE: the real-valued neural autoregressive density-estimator. In Advances in Neural Information Processing Systems 26, 2013.
[45] B. Uria, I. Murray, and H. Larochelle. A deep and tractable density estimator. In ICML 31, 2014.
[46] A. van den Oord and B. Schrauwen. The student-t mixture as a natural image patch prior with application to image compression. Journal of Machine Learning Research, 15(1):2061–2086, 2014.
[47] A. van den Oord and B. Schrauwen. Factoring variations in natural images with deep Gaussian mixture models. In Advances in Neural Information Processing Systems 27, 2014.
[48] J. H. van Hateren and A. van der Schaaf. Independent component filters of natural images compared with simple cells in primary visual cortex. Proc.
of the Royal Society B: Biological Sciences, 265(1394), 1998.
[49] D. Zoran and Y. Weiss. From learning models of natural image patches to whole image restoration. In IEEE International Conference on Computer Vision, 2011.
[50] D. Zoran and Y. Weiss. Natural images, Gaussian mixtures and dead leaves. In NIPS 25, 2012. | 2015 | 71 |
5,963 | Sparse PCA via Bipartite Matchings Megasthenis Asteris The University of Texas at Austin megas@utexas.edu Dimitris Papailiopoulos University of California, Berkeley dimitrisp@berkeley.edu Anastasios Kyrillidis The University of Texas at Austin anastasios@utexas.edu Alexandros G. Dimakis The University of Texas at Austin dimakis@austin.utexas.edu

Abstract

We consider the following multi-component sparse PCA problem: given a set of data points, we seek to extract a small number of sparse components with disjoint supports that jointly capture the maximum possible variance. Such components can be computed one by one, repeatedly solving the single-component problem and deflating the input data matrix, but this greedy procedure is suboptimal. We present a novel algorithm for sparse PCA that jointly optimizes multiple disjoint components. The extracted features capture variance that lies within a multiplicative factor arbitrarily close to 1 from the optimal. Our algorithm is combinatorial and computes the desired components by solving multiple instances of the bipartite maximum weight matching problem. Its complexity grows as a low order polynomial in the ambient dimension of the input data, but exponentially in its rank. However, it can be effectively applied on a low-dimensional sketch of the input data. We evaluate our algorithm on real datasets and empirically demonstrate that in many cases it outperforms existing, deflation-based approaches.

1 Introduction

Principal Component Analysis (PCA) reduces data dimensionality by projecting it onto principal subspaces spanned by the leading eigenvectors of the sample covariance matrix. It is one of the most widely used algorithms, with applications ranging from computer vision and document clustering to network anomaly detection (see e.g. [1, 2, 3, 4, 5]). Sparse PCA is a useful variant that offers higher data interpretability [6, 7, 8], a property that is sometimes desired even at the cost of statistical fidelity [5].
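For reference, plain PCA as described above reduces to an eigendecomposition of the empirical covariance; a minimal numpy sketch (variable names are ours):

```python
import numpy as np

def pca_components(S, k):
    """Leading k principal components of an n x d data matrix S (rows = points)."""
    Sc = S - S.mean(axis=0)            # center the data
    A = (Sc.T @ Sc) / S.shape[0]       # d x d empirical covariance
    w, V = np.linalg.eigh(A)           # eigh returns ascending eigenvalues
    return V[:, ::-1][:, :k]           # leading k eigenvectors as columns

# Toy data with one dominant axis of variation
rng = np.random.default_rng(0)
S = rng.standard_normal((500, 3)) * np.array([5.0, 1.0, 0.2])
U = pca_components(S, 2)               # 3 x 2 orthonormal basis
```

Unlike the sparse variant discussed next, this dense problem is solved exactly in polynomial time, which is part of why PCA is so widely used.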
Furthermore, when the obtained features are used in subsequent learning tasks, sparsity potentially leads to better generalization error [9]. Given a real n × d data matrix S representing n centered data points in d variables, the first sparse principal component is the sparse vector that maximizes the explained variance: $x_\star \triangleq \arg\max_{\|x\|_2 = 1,\ \|x\|_0 = s}\ x^\top A x$, (1) where $A = \tfrac{1}{n} S^\top S$ is the d × d empirical covariance matrix. Unfortunately, the directly enforced sparsity constraint makes the problem NP-hard and hence computationally intractable in general. A significant volume of prior work has focused on various algorithms for approximately solving this optimization problem [3, 5, 7, 8, 10, 11, 12, 13, 14, 15, 16, 17], while some theoretical results have also been established under statistical or spectral assumptions on the input data. In most cases one is not interested in finding only the first sparse eigenvector, but rather the first k, where k is the reduced dimension onto which the data will be projected. Contrary to the single-component problem, there has been very limited work on computing multiple sparse components. The scarcity is partially attributed to conventional wisdom stemming from PCA: multiple components can be computed one by one, repeatedly solving the single-component sparse PCA problem (1) and deflating [18] the input data to remove information captured by previously extracted components. In fact, multi-component sparse PCA is not a uniquely defined problem in the literature. Deflation-based approaches can lead to different outputs depending on the type of deflation [18]; extracted components may or may not be orthogonal, while they may have disjoint or overlapping supports. In the statistics literature, where the objective is typically to recover a "true" principal subspace, a branch of work has focused on "subspace row sparsity" [19], an assumption that leads to sparse components all supported on the same set of variables. 
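Problem (1) can be solved exactly for tiny instances by enumerating all size-s supports: for a fixed support, the optimum is the leading eigenpair of the corresponding s × s principal submatrix. A minimal sketch of this brute-force baseline (the function name and toy data are our own, not from the paper):

```python
import itertools
import numpy as np

def sparse_pc_bruteforce(A, s):
    """Exact solver for problem (1): max x^T A x s.t. ||x||_2 = 1, ||x||_0 = s.

    For a fixed support I, the optimum is the leading eigenpair of A[I, I],
    so we enumerate all C(d, s) supports (exponential cost; toy sizes only).
    """
    d = A.shape[0]
    best_val, best_x = -np.inf, None
    for I in itertools.combinations(range(d), s):
        vals, vecs = np.linalg.eigh(A[np.ix_(I, I)])  # ascending eigenvalues
        if vals[-1] > best_val:
            best_val = vals[-1]
            best_x = np.zeros(d)
            best_x[list(I)] = vecs[:, -1]             # leading eigenvector
    return best_val, best_x

rng = np.random.default_rng(0)
S = rng.standard_normal((50, 8))
A = S.T @ S / 50                                      # empirical covariance
val, x = sparse_pc_bruteforce(A, s=3)
print(round(val, 4), np.count_nonzero(x))
```

The optimal value is sandwiched between the largest diagonal entry of A (the s = 1 case) and the full leading eigenvalue, which makes the output easy to sanity-check.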
The authors of [20] discuss an alternative perspective on the fundamental objective of the sparse PCA problem. We focus on the multi-component sparse PCA problem with disjoint supports, i.e., the problem of computing a small number of sparse components with non-overlapping supports that jointly maximize the explained variance: $X_\star \triangleq \arg\max_{X \in \mathcal{X}_k} \operatorname{Tr}\big(X^\top A X\big)$, (2) where $\mathcal{X}_k \triangleq \big\{ X \in \mathbb{R}^{d \times k} : \|X_j\|_2 = 1,\ \|X_j\|_0 = s,\ \operatorname{supp}(X_i) \cap \operatorname{supp}(X_j) = \emptyset,\ \forall j \in [k],\ i < j \big\}$, with $X_j$ denoting the jth column of X. The number k of desired components is considered a small constant. Contrary to the greedy sequential approach that repeatedly uses deflation, our algorithm jointly computes all the columns of X and comes with theoretical approximation guarantees. Note that even if we could solve the single-component sparse PCA problem (1) exactly, the greedy approach could be highly suboptimal. We show this with a simple example in Sec. 7 of the appendix. Our Contributions: 1. We develop an algorithm that provably approximates the solution to the sparse PCA problem (2) within a multiplicative factor arbitrarily close to optimal. Our algorithm is the first that jointly optimizes multiple components with disjoint supports, and it operates by recasting the sparse PCA problem into multiple instances of the bipartite maximum weight matching problem. 2. The computational complexity of our algorithm grows as a low-order polynomial in the ambient dimension d, but is exponential in the intrinsic dimension of the input data, i.e., the rank of A. To alleviate the impact of this dependence, our algorithm can be applied on a low-dimensional sketch of the input data to obtain an approximate solution to (2). This extra level of approximation introduces an additional penalty in our theoretical approximation guarantees, which naturally depends on the quality of the sketch and, in turn, on the spectral decay of A. 3. 
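To see why greedy deflation can be suboptimal (the paper defers its own example to Sec. 7 of the appendix), here is a hand-constructed 4 × 4 PSD instance of our own with k = 2, s = 2: greedy grabs the strongly coupled pair {1, 2} first and is then stuck with the uncorrelated pair {3, 4}, while the joint optimum splits across {1, 3} and {2, 4}:

```python
import itertools
import numpy as np

def pair_value(A, I):
    # Best unit vector supported on I: leading eigenvalue of A[I, I].
    return np.linalg.eigvalsh(A[np.ix_(I, I)])[-1]

# Toy covariance (our construction, not from the paper); PSD, since its
# smallest eigenvalue is 0.04.
A = np.array([[1.00, 0.72, 0.48, 0.00],
              [0.72, 1.00, 0.00, 0.48],
              [0.48, 0.00, 1.00, 0.00],
              [0.00, 0.48, 0.00, 1.00]])
d, s = 4, 2

supports = list(itertools.combinations(range(d), s))

# Greedy: best single pair first, then best pair among remaining variables.
first = max(supports, key=lambda I: pair_value(A, I))
rest = [I for I in supports if not set(I) & set(first)]
second = max(rest, key=lambda I: pair_value(A, I))
greedy = pair_value(A, first) + pair_value(A, second)

# Joint: best partition of the variables into k = 2 disjoint pairs.
joint = max(pair_value(A, I) + pair_value(A, J)
            for I, J in itertools.combinations(supports, 2)
            if not set(I) & set(J))

print(first, round(greedy, 2), round(joint, 2))  # greedy 2.72 < joint 2.96
```

Greedy picks {0, 1} (value 1.72) and is left with {2, 3} (value 1.00), for 2.72 total; the joint optimum {0, 2} + {1, 3} achieves 1.48 + 1.48 = 2.96.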
We empirically evaluate our algorithm on real datasets, and compare it against state-of-the-art methods for the single-component sparse PCA problem (1) in conjunction with the appropriate deflation step. In many cases, our algorithm significantly outperforms these approaches. 2 Our Sparse PCA Algorithm We present a novel algorithm for the sparse PCA problem with multiple disjoint components. Our algorithm approximately solves the constrained maximization (2) on a d × d rank-r Positive SemiDefinite (PSD) matrix A within a multiplicative factor arbitrarily close to 1. It operates by recasting the maximization into multiple instances of the bipartite maximum weight matching problem. Each instance ultimately yields a feasible solution to the original sparse PCA problem: a set of k s-sparse components with disjoint supports. Finally, the algorithm exhaustively determines and outputs the set of components that maximizes the explained variance, i.e., the quadratic objective in (2). The computational complexity of our algorithm grows as a low-order polynomial in the ambient dimension d of the input, but exponentially in its rank r. Despite the unfavorable dependence on the rank, it is unlikely that a substantial improvement can be achieved in general [21]. However, decoupling the dependence on the ambient and the intrinsic dimension of the input has an interesting ramification: instead of the original input A, our algorithm can be applied on a low-rank surrogate to obtain an approximate solution, alleviating the dependence on r. We discuss this in Section 3. In the sequel, we describe the key ideas behind our algorithm, leading up to its guarantees in Theorem 1. Let $A = U \Lambda U^\top$ denote the truncated eigenvalue decomposition of A; Λ is a diagonal r × r matrix whose ith diagonal entry is equal to the ith largest eigenvalue of A, while the columns of U coincide with the corresponding eigenvectors. By the Cauchy-Schwarz inequality, for any $x \in \mathbb{R}^d$, $x^\top A x = \big\| \Lambda^{1/2} U^\top x \big\|_2^2 \ \ge\ \big\langle \Lambda^{1/2} U^\top x,\ c \big\rangle^2, \quad \forall\, c \in \mathbb{R}^r : \|c\|_2 = 1.$ (3) In fact, equality in (3) can always be achieved for c collinear to $\Lambda^{1/2} U^\top x \in \mathbb{R}^r$, and in turn $x^\top A x = \max_{c \in \mathcal{S}_2^{r-1}} \big\langle x,\ U \Lambda^{1/2} c \big\rangle^2$, where $\mathcal{S}_2^{r-1}$ denotes the ℓ2-unit sphere in r dimensions. More generally, for any $X \in \mathbb{R}^{d \times k}$, $\operatorname{Tr}\big(X^\top A X\big) = \sum_{j=1}^{k} X_j^\top A X_j = \max_{C :\, C_j \in \mathcal{S}_2^{r-1}\ \forall j} \sum_{j=1}^{k} \big\langle X_j,\ U \Lambda^{1/2} C_j \big\rangle^2.$ (4) Under the variational characterization of the trace objective in (4), the sparse PCA problem (2) can be re-written as a joint maximization over the variables X and C as follows: $\max_{X \in \mathcal{X}_k} \operatorname{Tr}\big(X^\top A X\big) = \max_{X \in \mathcal{X}_k} \max_{C :\, C_j \in \mathcal{S}_2^{r-1}\ \forall j} \sum_{j=1}^{k} \big\langle X_j,\ U \Lambda^{1/2} C_j \big\rangle^2.$ (5) The alternative formulation of the sparse PCA problem in (5) may seem more complicated than the original one in (2). However, it takes a step towards decoupling the dependence of the optimization on the ambient and intrinsic dimensions d and r, respectively. The motivation behind the introduction of the auxiliary variable C will become clearer in the sequel. For a given C, the value of $X \in \mathcal{X}_k$ that maximizes the objective in (5) for that C is $\widehat{X} \triangleq \arg\max_{X \in \mathcal{X}_k} \sum_{j=1}^{k} \big\langle X_j,\ W_j \big\rangle^2,$ (6) where $W \triangleq U \Lambda^{1/2} C$ is a real d × k matrix. The constrained, non-convex maximization (6) plays a central role in our developments. We will later describe a combinatorial $O\big(d \cdot (s \cdot k)^2\big)$ procedure to efficiently compute $\widehat{X}$, reducing the maximization to an instance of the bipartite maximum weight matching problem. For now, however, let us assume that such a procedure exists. Let $X_\star, C_\star$ be the pair that attains the maximum in (5); in other words, $X_\star$ is the desired solution to the sparse PCA problem. If the optimal value $C_\star$ of the auxiliary variable were known, then we would be able to recover $X_\star$ by solving the maximization (6) for $C = C_\star$. Of course, $C_\star$ is not known, and it is not possible to exhaustively consider all possible values in the domain of C. Instead, we examine only a finite number of possible values of C over a fine discretization of its domain. 
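The equality case of (3) is easy to check numerically: with c proportional to $\Lambda^{1/2} U^\top x$, the squared inner product $\langle x, U\Lambda^{1/2}c \rangle^2$ recovers $x^\top A x$ exactly, while any other unit c only lowers the bound. A quick sanity check of our own (random data, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
d, r = 10, 4

# Random rank-r PSD matrix A = U Lambda U^T with orthonormal U.
G = rng.standard_normal((d, r))
U, _ = np.linalg.qr(G)                       # d x r, orthonormal columns
lam = np.sort(rng.uniform(0.5, 2.0, r))[::-1]
A = U @ np.diag(lam) @ U.T

x = rng.standard_normal(d)
x /= np.linalg.norm(x)

a = np.sqrt(lam) * (U.T @ x)                 # Lambda^{1/2} U^T x
c = a / np.linalg.norm(a)                    # the equality direction in (3)

lhs = x @ A @ x
rhs = (x @ (U @ (np.sqrt(lam) * c))) ** 2    # <x, U Lambda^{1/2} c>^2
print(np.isclose(lhs, rhs))                  # equality is attained

# Any other unit vector c2 can only decrease the right-hand side, per (3).
c2 = rng.standard_normal(r)
c2 /= np.linalg.norm(c2)
print(lhs + 1e-12 >= (x @ (U @ (np.sqrt(lam) * c2))) ** 2)
```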
In particular, let $\mathcal{N}_{\epsilon/2}(\mathcal{S}_2^{r-1})$ denote a finite ǫ/2-net of the r-dimensional ℓ2-unit sphere; for any point in $\mathcal{S}_2^{r-1}$, the net contains a point within an ǫ/2 radius from the former. There are several ways to construct such a net. Further, let $\big[\mathcal{N}_{\epsilon/2}(\mathcal{S}_2^{r-1})\big]^{\otimes k} \subset \mathbb{R}^{r \times k}$ denote the kth Cartesian power of the aforementioned ǫ/2-net. By construction, this collection of points contains a matrix C that is column-wise close to $C_\star$. In turn, it can be shown using the properties of the net that the candidate solution $X \in \mathcal{X}_k$ obtained through (6) at that point C will be approximately as good as the optimal $X_\star$ in terms of the quadratic objective in (2). The above observations yield a procedure for approximately solving the sparse PCA problem (2). The steps are outlined in Algorithm 1. Given the desired number of components k and an accuracy parameter ǫ ∈ (0, 1), the algorithm generates a net $\big[\mathcal{N}_{\epsilon/2}(\mathcal{S}_2^{r-1})\big]^{\otimes k}$ and iterates over its points. At each point C, it computes a feasible solution for the sparse PCA problem – a set of k s-sparse components – by solving maximization (6) via a procedure (Alg. 2) that will be described in the sequel. The algorithm collects the candidate solutions identified at the points of the net. The best among them achieves an objective in (2) that provably lies close to optimal. More formally, Theorem 1. For any real d × d rank-r PSD matrix A, desired number of components k, number s of nonzero entries per component, and accuracy parameter ǫ ∈ (0, 1), Algorithm 1 outputs $X \in \mathcal{X}_k$ such that $\operatorname{Tr}\big(X^\top A X\big) \ge (1 - \epsilon) \cdot \operatorname{Tr}\big(X_\star^\top A X_\star\big)$, where $X_\star \triangleq \arg\max_{X \in \mathcal{X}_k} \operatorname{Tr}(X^\top A X)$, in time $T_{\mathrm{SVD}}(r) + O\big( (4/\epsilon)^{r \cdot k} \cdot d \cdot (s \cdot k)^2 \big)$.
Algorithm 1 Sparse PCA (Multiple disjoint components)
input: PSD d × d rank-r matrix A, ǫ ∈ (0, 1), k ∈ Z+.
output: X ∈ Xk {Theorem 1}
1: C ← {}
2: [U, Λ] ← EIG(A)
3: for each C ∈ $\big[\mathcal{N}_{\epsilon/2}(\mathcal{S}_2^{r-1})\big]^{\otimes k}$ do
4: W ← UΛ^{1/2}C {W ∈ R^{d×k}}
5: $\widehat{X}$ ← arg max_{X∈Xk} $\sum_{j=1}^{k} \langle X_j, W_j \rangle^2$ {Alg. 
2}
6: C ← C ∪ {$\widehat{X}$}
7: end for
8: X ← arg max_{X∈C} Tr(X⊤AX)
Algorithm 1 is the first nontrivial algorithm that provably approximates the solution of the sparse PCA problem (2). According to Theorem 1, it achieves an objective value that lies within a multiplicative factor from the optimal, arbitrarily close to 1. Its complexity grows as a low-order polynomial in the dimension d of the input, but exponentially in the intrinsic dimension r. Note, however, that it can be substantially better compared to the $O(d^{s \cdot k})$ brute-force approach that exhaustively considers all candidate supports for the k sparse components. The complexity of our algorithm follows from the cardinality of the net and the complexity of Algorithm 2, the subroutine that solves the constrained maximization (6). The latter is a key ingredient of our algorithm, and is discussed in detail in the next subsection. A formal proof of Theorem 1 is provided in Section 9.2. 2.1 Sparse Components via Bipartite Matchings At the core of Alg. 1 lies a procedure that solves the constrained maximization (6) (Alg. 2). The latter breaks down the maximization into two stages. First, it identifies the support of the optimal solution $\widehat{X}$ by solving an instance of the maximum weight matching problem on a bipartite graph G. Then, it recovers the exact values of its nonzero entries based on the Cauchy-Schwarz inequality. In the sequel, we provide a brief description of Alg. 2, leading up to its guarantees in Lemma 2.1. Let $I_j \triangleq \operatorname{supp}(\widehat{X}_j)$ be the support of the jth column of $\widehat{X}$, j = 1, ..., k. The objective in (6) becomes $\sum_{j=1}^{k} \big\langle \widehat{X}_j,\ W_j \big\rangle^2 = \sum_{j=1}^{k} \Big( \sum_{i \in I_j} \widehat{X}_{ij} \cdot W_{ij} \Big)^2 \le \sum_{j=1}^{k} \sum_{i \in I_j} W_{ij}^2.$ (7) The inequality is due to Cauchy-Schwarz and the constraint $\|X_j\|_2 = 1,\ \forall j \in \{1, \ldots, k\}$. In fact, if an oracle reveals the supports $I_j$, j = 1, ..., k, the upper bound in (7) can always be achieved by setting the nonzero entries of $\widehat{X}$ as in Algorithm 2 (Line 6). 
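Given the supports, the fill-in step is exactly the equality case of Cauchy-Schwarz: normalizing $W_j$ restricted to $I_j$ attains the upper bound in (7). A quick check of our own (the supports are chosen arbitrarily for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
d, k = 8, 2
W = rng.standard_normal((d, k))

# A fixed (arbitrary) collection of disjoint supports, one per component.
supports = [np.array([0, 2, 5]), np.array([1, 4, 7])]

X = np.zeros((d, k))
for j, I in enumerate(supports):
    X[I, j] = W[I, j] / np.linalg.norm(W[I, j])   # Alg. 2, Line 6

obj = sum((X[:, j] @ W[:, j]) ** 2 for j in range(k))
bound = sum((W[I, j] ** 2).sum() for j, I in enumerate(supports))
print(np.isclose(obj, bound))   # the Cauchy-Schwarz bound (7) is attained
```

This is why Algorithm 2 only needs to search over supports: once the supports are fixed, the optimal entry values are available in closed form.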
Therefore, the key to solving (6) is determining the collection of supports that maximizes the right-hand side of (7). [Figure 1: The graph G generated by Alg. 2, used to determine the support of the solution $\widehat{X}$ in (6); placeholder vertex sets $U_1, \ldots, U_k$ on one side, variable vertices $v_1, \ldots, v_d$ on the other, with edge weights $W_{ij}^2$.] By constraint, the sets $I_j$ must be pairwise disjoint, each with cardinality s. Consider a weighted bipartite graph $G = \big(U = \{U_1, \ldots, U_k\},\ V,\ E\big)$ constructed as follows1 (Fig. 1): • V is a set of d vertices $v_1, \ldots, v_d$, corresponding to the d variables, i.e., the d rows of $\widehat{X}$. • U is a set of k · s vertices, conceptually partitioned into k disjoint subsets $U_1, \ldots, U_k$, each of cardinality s. The jth subset, $U_j$, is associated with the support $I_j$; the s vertices $u_\alpha^{(j)}$, α = 1, ..., s, in $U_j$ serve as placeholders for the variables/indices in $I_j$. • Finally, the edge set is E = U × V. The edge weights are determined by the d × k matrix W in (6). In particular, the weight of edge $(u_\alpha^{(j)}, v_i)$ is equal to $W_{ij}^2$. Note that all vertices in $U_j$ are effectively identical; they all share a common neighborhood and common edge weights. 1The construction is formally outlined in Algorithm 4 in Section 8.
Algorithm 2 Compute Candidate Solution
input: Real d × k matrix W
output: $\widehat{X} = \arg\max_{X \in \mathcal{X}_k} \sum_{j=1}^{k} \langle X_j, W_j \rangle^2$
1: $G\big(\{U_j\}_{j=1}^{k}, V, E\big)$ ← GENBIGRAPH(W) {Alg. 4}
2: M ← MAXWEIGHTMATCH(G) {⊆ E}
3: $\widehat{X}$ ← $0_{d \times k}$
4: for j = 1, ..., k do
5: $I_j$ ← {i ∈ {1, ..., d} : (u, $v_i$) ∈ M, u ∈ $U_j$}
6: $[\widehat{X}_j]_{I_j}$ ← $[W_j]_{I_j} / \|[W_j]_{I_j}\|_2$
7: end for
Any feasible support $\{I_j\}_{j=1}^{k}$ corresponds to a perfect matching in G and vice versa. Recall that a matching is a subset of the edges containing no two edges incident to the same vertex, while a perfect matching, in the case of an unbalanced bipartite graph G = (U, V, E) with |U| ≤ |V|, is a matching that contains at least one incident edge for each vertex in U. Given a perfect matching M ⊆ E, the disjoint neighborhoods of the $U_j$s under M yield a support $\{I_j\}_{j=1}^{k}$. 
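Because all s vertices in each $U_j$ are interchangeable, this support-via-matching machinery can be sketched with an off-the-shelf assignment solver: build a (k·s) × d profit matrix whose rows for block j are copies of the weights $W_{ij}^2$, and solve a maximum-weight assignment. A minimal sketch of our own using SciPy's `linear_sum_assignment` as a stand-in for the paper's MAXWEIGHTMATCH (the Hungarian-variant details and data structures differ):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def candidate_solution(W, s):
    """Sketch of Alg. 2: supports via bipartite matching, values via (7)."""
    d, k = W.shape
    assert k * s <= d
    # Row block j holds s identical copies of the edge weights W[:, j] ** 2.
    profit = np.repeat((W ** 2).T, s, axis=0)         # (k*s) x d
    rows, cols = linear_sum_assignment(profit, maximize=True)
    X = np.zeros((d, k))
    for j in range(k):
        I = cols[(rows // s) == j]                    # variables matched to U_j
        X[I, j] = W[I, j] / np.linalg.norm(W[I, j])   # Alg. 2, Line 6
    return X

rng = np.random.default_rng(3)
W = rng.standard_normal((6, 2))
X = candidate_solution(W, s=2)
obj = sum((X[:, j] @ W[:, j]) ** 2 for j in range(W.shape[1]))
print(round(obj, 4))
```

Since k·s ≤ d, every placeholder row gets matched to a distinct variable, which is exactly a perfect matching in the unbalanced graph G of Figure 1; disjointness of the supports is automatic.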
Conversely, any valid support yields a unique perfect matching in G (taking into account that all vertices in $U_j$ are isomorphic). Moreover, due to the choice of weights in G, the right-hand side of (7) for a given support $\{I_j\}_{j=1}^{k}$ is equal to the weight of the matching M in G induced by the former, i.e., $\sum_{j=1}^{k} \sum_{i \in I_j} W_{ij}^2 = \sum_{(u,v) \in M} w(u, v)$. It follows that determining the support of the solution in (6) reduces to solving the maximum weight matching problem on the bipartite graph G. Algorithm 2 readily follows. Given $W \in \mathbb{R}^{d \times k}$, the algorithm generates a weighted bipartite graph G as described, and computes its maximum weight matching. Based on the latter, it first recovers the desired support of $\widehat{X}$ (Line 5), and subsequently the exact values of its nonzero entries (Line 6). The running time is dominated by the computation of the matching, which can be done in $O\big(|E||U| + |U|^2 \log |U|\big)$ using a variant of the Hungarian algorithm [22]. Hence, Lemma 2.1. For any $W \in \mathbb{R}^{d \times k}$, Algorithm 2 computes the solution to (6) in time $O\big(d \cdot (s \cdot k)^2\big)$. A more formal analysis and proof of Lemma 2.1 is available in Sec. 9.1. This completes the description of our sparse PCA algorithm (Alg. 1) and the proof sketch of Theorem 1. 3 Sparse PCA on Low-Dimensional Sketches
Algorithm 3 Sparse PCA on Low-Dim. Sketch
input: Real n × d matrix S, r ∈ Z+, ǫ ∈ (0, 1), k ∈ Z+.
output: $X_{(r)} \in \mathcal{X}_k$. {Thm. 2}
1: $\bar{S}$ ← SKETCH(S, r)
2: $\bar{A}$ ← $\bar{S}^\top \bar{S}$
3: $X_{(r)}$ ← ALGORITHM 1($\bar{A}$, ǫ, k).
Algorithm 1 approximately solves the sparse PCA problem (2) on a d × d rank-r PSD matrix A in time that grows as a low-order polynomial in the ambient dimension d, but depends exponentially on r. This dependence can be prohibitive in practice. To mitigate its effect, we can apply our sparse PCA algorithm on a low-rank sketch of A. Intuitively, the quality of the extracted components should depend on how well that low-rank surrogate approximates the original input. 
More formally, let S be the real n × d data matrix representing n (potentially centered) datapoints in d variables, and A the corresponding d × d covariance matrix. Further, let $\bar{S}$ be a low-dimensional sketch of the original data: an n × d matrix whose rows lie in an r-dimensional subspace, with r being an accuracy parameter. Such a sketch can be obtained in several ways, including for example exact or approximate SVD, or online sketching methods [23]. Finally, let $\bar{A} = \tfrac{1}{n} \bar{S}^\top \bar{S}$ be the covariance matrix of the sketched data. Then, instead of A, we can approximately solve the sparse PCA problem by applying Algorithm 1 on the low-rank surrogate $\bar{A}$. The above steps are formally outlined in Algorithm 3. We note that the covariance matrix $\bar{A}$ does not need to be explicitly computed; Algorithm 1 can operate directly on the (sketched) input data matrix. Theorem 2. For any n × d input data matrix S, with corresponding empirical covariance matrix $A = \tfrac{1}{n} S^\top S$, any desired number of components k, and accuracy parameters ǫ ∈ (0, 1) and r, Algorithm 3 outputs $X_{(r)} \in \mathcal{X}_k$ such that $\operatorname{Tr}\big(X_{(r)}^\top A X_{(r)}\big) \ge (1 - \epsilon) \cdot \operatorname{Tr}\big(X_\star^\top A X_\star\big) - 2 \cdot k \cdot \|A - \bar{A}\|_2$, where $X_\star \triangleq \arg\max_{X \in \mathcal{X}_k} \operatorname{Tr}(X^\top A X)$, in time $T_{\mathrm{SKETCH}}(r) + T_{\mathrm{SVD}}(r) + O\big( (4/\epsilon)^{r \cdot k} \cdot d \cdot (s \cdot k)^2 \big)$. The error term $\|A - \bar{A}\|_2$, and in turn the tightness of the approximation guarantees, hinges on the quality of the sketch. Roughly, higher values of the parameter r should allow for a sketch that more accurately represents the original data, leading to tighter guarantees. That is the case, for example, when the sketch is obtained through exact SVD. In that sense, Theorem 2 establishes a natural trade-off between the running time of Algorithm 3 and the quality of the approximation guarantees. (See [24] for additional results.) A formal proof of Theorem 2 is provided in Appendix Section 9.3. 4 Related Work A significant volume of work has focused on the single-component sparse PCA problem (1); we only scratch the surface here and refer the reader to the works cited below and references therein. 
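When the sketch is an exact rank-r truncated SVD, the error term $\|A - \bar{A}\|_2$ in Theorem 2 is simply the (r+1)-th eigenvalue of A, which is easy to verify numerically. A sanity check of our own (synthetic data, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(4)
n, d, r = 200, 12, 4

# Data with decaying column scales, so the spectrum of A decays too.
S = rng.standard_normal((n, d)) @ np.diag(np.linspace(2.0, 0.2, d))
A = S.T @ S / n

# Rank-r sketch of the data via truncated SVD of S.
U, sig, Vt = np.linalg.svd(S, full_matrices=False)
S_bar = (U[:, :r] * sig[:r]) @ Vt[:r]
A_bar = S_bar.T @ S_bar / n

err = np.linalg.norm(A - A_bar, 2)           # spectral-norm sketch error
eigs = np.sort(np.linalg.eigvalsh(A))[::-1]  # eigenvalues, descending
print(np.isclose(err, eigs[r]))              # equals the (r+1)-th eigenvalue
```

The faster the spectrum of A decays, the smaller this additive penalty, which is the trade-off Theorem 2 formalizes.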
Representative examples range from early heuristics in [7], to the LASSO-based techniques in [8], the elastic net ℓ1-regression in [5], ℓ1- and ℓ0-regularized optimization methods such as GPower in [10], a greedy branch-and-bound technique in [11], or semidefinite programming approaches [3, 12, 13]. Many works focus on a statistical analysis that pertains to specific data models and the recovery of a "true" sparse component. In practice, the most competitive results in terms of the maximization in (1) seem to be achieved by (i) the simple and efficient truncated power (TPower) iteration of [14], (ii) the approach of [15] stemming from an expectation-maximization (EM) formulation, and (iii) the SpanSPCA framework of [16], which solves the sparse PCA problem through low-rank approximations based on [17]. We are not aware of any algorithm that explicitly addresses the multi-component sparse PCA problem (2). Multiple components can be extracted by repeatedly solving (1) with one of the aforementioned methods. To ensure disjoint supports, variables "selected" by a component are removed from the dataset. However, this greedy approach can result in a highly suboptimal objective value (see Sec. 7). More generally, there has been relatively limited work on the estimation of principal subspaces or multiple components under sparsity constraints. Non-deflation-based algorithms include extensions of the diagonal [25] and iterative thresholding [26] approaches, while [27] and [28] propose methods that rely on the "row sparsity for subspaces" assumption of [19]. These methods yield components supported on a common set of variables, and hence solve a problem different from (2). In [20], the authors discuss the multi-component sparse PCA problem, propose an alternative objective function, and obtain interesting theoretical guarantees for that problem. The authors of [29] consider a structured variant of sparse PCA where higher-order structure is encoded by atomic norm regularization. 
Finally, [30] develops a framework for sparse matrix factorization problems, based on an atomic norm. Their framework captures sparse PCA – although not explicitly the constraint of disjoint supports – but the resulting optimization problem, albeit convex, is NP-hard. 5 Experiments We evaluate our algorithm on a series of real datasets, and compare it to deflation-based approaches for sparse PCA using TPower [14], EM [15], and SpanSPCA [16]. The latter are representative of the state of the art for the single-component sparse PCA problem (1). Multiple components are computed one by one. To ensure disjoint supports, the deflation step effectively amounts to removing from the dataset all variables used by previously extracted components. For algorithms that are randomly initialized, we depict the best results over multiple random restarts. Additional experimental results are listed in Section 11 of the appendix. Our experiments are conducted in a Matlab environment. Due to its nature, our algorithm is easily parallelizable; its prototypical implementation utilizes the Parallel Pool Matlab feature to exploit multicore (or distributed cluster) capabilities. Recall that our algorithm operates on a low-rank approximation of the input data. Unless otherwise specified, it is configured for a rank-4 approximation obtained via truncated SVD. Finally, we note that our algorithm is slower than the deflation-based methods. We set a barrier on the execution time of our algorithm at the cost of the theoretical approximation guarantees; the algorithm returns the best result at the time of termination. This "early termination" can only hurt the performance of our algorithm. Leukemia Dataset. We evaluate our algorithm on the Leukemia dataset [31]. The dataset comprises 72 samples, each consisting of expression values for 12582 probe sets. We extract k = 5 sparse components, each active on s = 50 features. In Fig. 
2(a), we plot the cumulative explained variance versus the number of components. [Figure 2: Cumulative variance captured by k s-sparse extracted components; Leukemia dataset [31]. We arbitrarily set s = 50 nonzero entries per component. Fig. 2(a) depicts the cumulative variance vs. the number of components, for k = 5; SPCABiPart gains +8.82% over the best deflation-based method (TPower, EM-SPCA, SpanSPCA). Fig. 2(b) depicts the total cumulative variance achieved for k = 2, ..., 10, with gains ranging from +0.48% to +8.82%.] Deflation-based approaches are greedy: the leading components capture high values of variance, but subsequent ones contribute less. On the contrary, our algorithm jointly optimizes the k = 5 components and achieves higher total cumulative variance; one cannot identify a top component. We repeat the experiment for multiple values of k. Fig. 2(b) depicts the total cumulative variance captured by each method, for each value of k. Additional Datasets. We repeat the experiment on multiple datasets, arbitrarily selected from [31]. Table 1 lists the total cumulative variance captured by k = 5 components, each with s = 40 nonzero entries, extracted using the four methods. Our algorithm achieves the highest values in most cases. Bag of Words (BoW) Dataset [31]. This is a collection of text corpora stored under the "bag-of-words" model. For each text corpus, a vocabulary of d words is extracted upon tokenization and the removal of stopwords and words appearing fewer than ten times in total. 
Each document is then represented as a vector in that d-dimensional space, with the ith entry corresponding to the number of appearances of the ith vocabulary entry in the document. We solve the sparse PCA problem (2) on the word-by-word cooccurrence matrix, and extract k = 8 sparse components, each with cardinality s = 10. We note that the cooccurrence matrix is not explicitly constructed; our algorithm can operate directly on the input word-by-document matrix. Table 2 lists the variance captured by each method; our algorithm consistently outperforms the other approaches. Finally, note that here each sparse component effectively selects a small set of words. In turn, the k extracted components can be interpreted as a set of well-separated topics.
Table 1: Total cumulative variance captured by k = 5 40-sparse extracted components on various datasets [31]. For each dataset, we list the size (#samples × #variables) and the value of variance captured by each method. Our algorithm operates on a rank-4 sketch in all cases.
Dataset (size) | TPower | EM sPCA | SpanSPCA | SPCABiPart
AMZN COM REV (1500×10000) | 7.31e+03 | 7.32e+03 | 7.31e+03 | 7.79e+03
ARCENE TRAIN (100×10000) | 1.08e+07 | 1.02e+07 | 1.08e+07 | 1.10e+07
CBCL FACE TRAIN (2429×361) | 5.06e+00 | 5.18e+00 | 5.23e+00 | 5.29e+00
ISOLET-5 (1559×617) | 3.31e+01 | 3.43e+01 | 3.34e+01 | 3.51e+01
LEUKEMIA (72×12582) | 5.00e+09 | 5.03e+09 | 4.84e+09 | 5.37e+09
PEMS TRAIN (267×138672) | 3.94e+00 | 3.58e+00 | 3.89e+00 | 3.75e+00
MFEAT PIX (2000×240) | 5.00e+02 | 5.27e+02 | 5.08e+02 | 5.47e+02
In Table 3, we list the 
topics extracted from the NY Times corpus (part of the Bag of Words dataset). The corpus consists of 3 · 10^5 news articles and a vocabulary of d = 102660 words.
Table 2: Total variance captured by k = 8 extracted components, each with s = 15 nonzero entries – Bag of Words dataset [31]. For each corpus, we list the size (#documents × #vocabulary-size) and the explained variance. Our algorithm operates on a rank-5 sketch in all cases.
Corpus (size) | TPower | EM sPCA | SpanSPCA | SPCABiPart
BOW:NIPS (1500×12419) | 2.51e+03 | 2.57e+03 | 2.53e+03 | 3.34e+03 (+29.98%)
BOW:KOS (3430×6906) | 4.14e+01 | 4.24e+01 | 4.21e+01 | 6.14e+01 (+44.57%)
BOW:ENRON (39861×28102) | 2.11e+02 | 2.00e+02 | 2.09e+02 | 2.38e+02 (+12.90%)
BOW:NYTIMES (300000×102660) | 4.81e+01 | − | 4.81e+01 | 5.31e+01 (+10.38%)
6 Conclusions We considered the sparse PCA problem for multiple components with disjoint supports. Existing methods for the single-component problem can be used along with an appropriate deflation step to compute multiple components one by one, leading to potentially suboptimal results. We presented a novel algorithm for jointly computing multiple sparse and disjoint components with provable approximation guarantees. Our algorithm is combinatorial and exploits interesting connections between the sparse PCA and bipartite maximum weight matching problems. Its running time grows as a low-order polynomial in the ambient dimension of the input data, but depends exponentially on its rank. To alleviate this dependency, we can apply the algorithm on a low-dimensional sketch of the input, at the cost of an additional error in our theoretical approximation guarantees. Empirical evaluation showed that in many cases our algorithm outperforms deflation-based approaches. Acknowledgments DP is generously supported by NSF awards CCF-1217058 and CCF-1116404 and MURI AFOSR grant 556016. This research has been supported by NSF Grants CCF 1344179, 1344364, 1407278, 1422549 and ARO YIP W911NF-14-1-0258. References [1] A. 
Majumdar, "Image compression by sparse pca coding in curvelet domain," Signal, Image and Video Processing, vol. 3, no. 1, pp. 27–34, 2009. [2] Z. Wang, F. Han, and H. Liu, "Sparse principal component analysis for high dimensional multivariate time series," in Proceedings of the Sixteenth International Conference on Artificial Intelligence and Statistics, pp. 48–56, 2013. [3] A. d'Aspremont, L. El Ghaoui, M. Jordan, and G. Lanckriet, "A direct formulation for sparse pca using semidefinite programming," SIAM Review, vol. 49, no. 3, pp. 434–448, 2007.
Table 3: BOW:NYTIMES dataset [31]. The table lists the words corresponding to the s = 10 nonzero entries of each of the k = 8 extracted components (topics). Words corresponding to higher-magnitude entries appear higher in the topic.
Rank | Topic 1 | Topic 2 | Topic 3 | Topic 4 | Topic 5 | Topic 6 | Topic 7 | Topic 8
1 | percent | zzz united states | zzz bush | company | team | cup | school | zzz al gore
2 | million | zzz u s | official | companies | game | minutes | student | zzz george bush
3 | money | zzz american | government | market | season | add | children | campaign
4 | high | attack | president | stock | player | tablespoon | women | election
5 | program | military | group | business | play | oil | show | plan
6 | number | palestinian | leader | billion | point | teaspoon | book | tax
7 | need | war | country | analyst | run | water | family | public
8 | part | administration | political | firm | right | pepper | look | zzz washington
9 | problem | zzz white house | american | sales | home | large | hour | member
10 | com | games | law | cost | won | food | small | nation
[4] R. Jiang, H. Fei, and J. Huan, "Anomaly localization for network data streams with graph joint sparse pca," in Proceedings of the 17th ACM SIGKDD, pp. 886–894, ACM, 2011. [5] H. Zou, T. Hastie, and R. Tibshirani, "Sparse principal component analysis," Journal of Computational and Graphical Statistics, vol. 15, no. 2, pp. 265–286, 2006. [6] H. Kaiser, "The varimax criterion for analytic rotation in factor analysis," Psychometrika, vol. 23, no. 3, pp. 187–200, 1958. [7] I. 
Jolliffe, "Rotation of principal components: choice of normalization constraints," Journal of Applied Statistics, vol. 22, no. 1, pp. 29–35, 1995. [8] I. Jolliffe, N. Trendafilov, and M. Uddin, "A modified principal component technique based on the lasso," Journal of Computational and Graphical Statistics, vol. 12, no. 3, pp. 531–547, 2003. [9] C. Boutsidis, P. Drineas, and M. Magdon-Ismail, "Sparse features for pca-like linear regression," in Advances in Neural Information Processing Systems, pp. 2285–2293, 2011. [10] M. Journée, Y. Nesterov, P. Richtárik, and R. Sepulchre, "Generalized power method for sparse principal component analysis," The Journal of Machine Learning Research, vol. 11, pp. 517–553, 2010. [11] B. Moghaddam, Y. Weiss, and S. Avidan, "Spectral bounds for sparse pca: Exact and greedy algorithms," NIPS, vol. 18, p. 915, 2006. [12] A. d'Aspremont, F. Bach, and L. E. Ghaoui, "Optimal solutions for sparse principal component analysis," The Journal of Machine Learning Research, vol. 9, pp. 1269–1294, 2008. [13] Y. Zhang, A. d'Aspremont, and L. Ghaoui, "Sparse pca: Convex relaxations, algorithms and applications," Handbook on Semidefinite, Conic and Polynomial Optimization, pp. 915–940, 2012. [14] X.-T. Yuan and T. Zhang, "Truncated power method for sparse eigenvalue problems," The Journal of Machine Learning Research, vol. 14, no. 1, pp. 899–925, 2013. [15] C. D. Sigg and J. M. Buhmann, "Expectation-maximization for sparse and non-negative pca," in Proceedings of the 25th International Conference on Machine Learning, ICML '08, (New York, NY, USA), pp. 960–967, ACM, 2008. [16] D. Papailiopoulos, A. Dimakis, and S. Korokythakis, "Sparse pca through low-rank approximations," in Proceedings of The 30th International Conference on Machine Learning, pp. 747–755, 2013. [17] M. Asteris, D. S. Papailiopoulos, and G. N. Karystinos, "The sparse principal component of a constant-rank matrix," Information Theory, IEEE Transactions on, vol. 60, pp. 
2281–2290, April 2014. [18] L. Mackey, "Deflation methods for sparse pca," NIPS, vol. 21, pp. 1017–1024, 2009. [19] V. Vu and J. Lei, "Minimax rates of estimation for sparse pca in high dimensions," in International Conference on Artificial Intelligence and Statistics, pp. 1278–1286, 2012. [20] M. Magdon-Ismail and C. Boutsidis, "Optimal sparse linear auto-encoders and sparse pca," arXiv preprint arXiv:1502.06626, 2015. [21] M. Magdon-Ismail, "NP-hardness and inapproximability of sparse PCA," CoRR, vol. abs/1502.05675, 2015. [22] L. Ramshaw and R. E. Tarjan, "On minimum-cost assignments in unbalanced bipartite graphs," HP Labs, Palo Alto, CA, USA, Tech. Rep. HPL-2012-40R1, 2012. [23] N. Halko, P.-G. Martinsson, and J. A. Tropp, "Finding structure with randomness: Probabilistic algorithms for constructing approximate matrix decompositions," SIAM Review, vol. 53, no. 2, pp. 217–288, 2011. [24] M. Asteris, D. Papailiopoulos, A. Kyrillidis, and A. G. Dimakis, "Sparse pca via bipartite matchings," arXiv preprint arXiv:1508.00625, 2015. [25] I. M. Johnstone and A. Y. Lu, "On consistency and sparsity for principal components analysis in high dimensions," Journal of the American Statistical Association, vol. 104, no. 486, 2009. [26] Z. Ma, "Sparse principal component analysis and iterative thresholding," The Annals of Statistics, vol. 41, no. 2, pp. 772–801, 2013. [27] V. Q. Vu, J. Cho, J. Lei, and K. Rohe, "Fantope projection and selection: A near-optimal convex relaxation of sparse pca," in NIPS, pp. 2670–2678, 2013. [28] Z. Wang, H. Lu, and H. Liu, "Nonconvex statistical optimization: minimax-optimal sparse pca in polynomial time," arXiv preprint arXiv:1408.5352, 2014. [29] R. Jenatton, G. Obozinski, and F. Bach, "Structured sparse principal component analysis," in Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, AISTATS, pp. 366–373, 2010. [30] E. Richard, G. R. Obozinski, and J.-P. 
Vert, “Tight convex relaxations for sparse matrix factorization,” in Advances in Neural Information Processing Systems, pp. 3284–3292, 2014. [31] M. Lichman, “UCI machine learning repository,” 2013. 9 | 2015 | 72 |
Market Scoring Rules Act As Opinion Pools For Risk-Averse Agents

Mithun Chakraborty, Sanmay Das
Department of Computer Science and Engineering
Washington University in St. Louis
St. Louis, MO 63130
{mithunchakraborty,sanmay}@wustl.edu

Abstract

A market scoring rule (MSR) – a popular tool for designing algorithmic prediction markets – is an incentive-compatible mechanism for the aggregation of probabilistic beliefs from myopic risk-neutral agents. In this paper, we add to a growing body of research aimed at understanding the precise manner in which the price process induced by a MSR incorporates private information from agents who deviate from the assumption of risk-neutrality. We first establish that, for a myopic trading agent with a risk-averse utility function, a MSR satisfying mild regularity conditions elicits the agent's risk-neutral probability conditional on the latest market state rather than her true subjective probability. Hence, we show that a MSR under these conditions effectively behaves like a more traditional method of belief aggregation, namely an opinion pool, for agents' true probabilities. In particular, the logarithmic market scoring rule acts as a logarithmic pool for constant absolute risk aversion utility agents, and as a linear pool for an atypical budget-constrained agent utility with decreasing absolute risk aversion. We also point out the interpretation of a market maker under these conditions as a Bayesian learner even when agent beliefs are static.

1 Introduction

How should we combine opinions (or beliefs) about hidden truths (or uncertain future events) furnished by several individuals with potentially diverse information sets into a single group judgment for decision- or policy-making purposes? This has been a fundamental question across disciplines for a long time (Surowiecki [2005]).
One simple, principled approach towards achieving this end is the opinion pool (OP), which directly solicits inputs from informants in the form of probabilities (or distributions) and then maps this vector of inputs to a single probability (or distribution) based on certain axioms (Genest and Zidek [1986]). However, this technique abstracts away from the issue of providing proper incentives to a selfish-rational agent to reveal her private information honestly. Financial markets approach the problem differently, offering financial incentives for traders to supply their information about valuations and aggregating this information into informative prices. A prediction market is a relatively novel tool that builds upon this idea, offering trade in a financial security whose final monetary worth is tied to the future revelation of some currently unknown ground truth. Hanson [2003] introduced a family of algorithms for designing automated prediction markets called the market scoring rule (MSR), of which the Logarithmic Market Scoring Rule (LMSR) is arguably the most widely used and well-studied. A MSR effectively acts as a cost function-based market maker always willing to take the other side of a trade with any willing buyer or seller, and re-adjusting its quoted price after every transaction. One of the most attractive properties of a MSR is its incentive-compatibility for a myopic risk-neutral trader. But this also means that, every time a MSR trades with such an agent, the updated market price is reset to the subjective probability of that agent; the market mechanism itself does not play an active role in unifying pieces of information gleaned from the entire trading history into its current price. Ostrovsky [2012] and Iyer et al. [2014] have shown that, with differentially informed Bayesian risk-neutral and risk-averse agents respectively, trading repeatedly, "information gets aggregated" in a MSR-based market in a perfect Bayesian equilibrium.
However, if agent beliefs themselves do not converge, can the price process emerging out of their interaction with a MSR still be viewed as an aggregator of information in some sense? Intuitively, even if an agent does not revise her belief based on her inference about her peers' information from market history, her conservative attitude towards risk should compel her to trade in such a way as to move the market price not all the way to her private belief but to some function of her belief and the most recent price; thus, the evolving price should always retain some memory of all agents' information sequentially injected into the market. Therefore, the assumption of belief-updating agents may not be indispensable for providing theoretical guarantees on how the market incorporates agent beliefs. A few attempts in this vein can be found in the literature, typically embedded in a broader context (Sethi and Vaughan [2015], Abernethy et al. [2014]), but there have been few general results; see Section 1.1 for a review. In this paper, we develop a new unified understanding of the information aggregation characteristics of a market with risk-averse agents mediated by a MSR, with no regard to how the agents' beliefs are formed. In fact, we demonstrate an equivalence between such MSR-mediated markets and opinion pools. We do so by first proving, in Section 3, that for any MSR interacting with myopic risk-averse traders, the revised instantaneous price after every trade equals the latest trader's risk-neutral probability conditional on the preceding market state. We then show that this price update rule satisfies an axiomatic characterization of opinion pooling functions from the literature, establishing the equivalence. In Sections 3.1 and 3.2, we focus on a specific MSR, the commonly used logarithmic variety (LMSR).
We demonstrate that a LMSR-mediated market with agents having constant absolute risk aversion (CARA) utilities is equivalent to a logarithmic opinion pool, and that a LMSR-mediated market with budget-constrained agents having a specific concave utility with decreasing absolute risk aversion is equivalent to a linear opinion pool. We also demonstrate how the agents' utility function parameters acquire additional significance with respect to this pooling operation, and that in these two scenarios the market maker can be interpreted as a Bayesian learning algorithm even if agents never update beliefs. Our results are reminiscent of similar findings about competitive equilibrium prices in markets with rational, risk-averse agents (Pennock [1999], Beygelzimer et al. [2012], Millin et al. [2012] etc.), but those models require that agents learn from prices and also abstract away from any consideration of microstructure and the dynamics of actual price formation (how the agents would reach the equilibrium is left open). By contrast, our results do not presuppose any kind of generative model for agent signals, and also do not involve an equilibrium analysis – hence they can be used as tools to analyze the convergence characteristics of the market price in non-equilibrium situations with potentially fixed-belief or irrational agents.

1.1 Related Work

Given the plethora of experimental and empirical evidence that prediction markets are at least as effective as more traditional means of belief aggregation (Wolfers and Zitzewitz [2004], Cowgill and Zitzewitz [2013]), there has been considerable work on understanding how such a market formulates its own consensus belief from individual signals. An important line of research (Beygelzimer et al. [2012], Millin et al. [2012], Hu and Storkey [2014], Storkey et al.
[2015]) has focused on a competitive equilibrium analysis of prediction markets under various trader models, and found an equivalence between the market's equilibrium price and the outcome of an opinion pool with the same agents. Seminal work in this field was done by Pennock [1999], who showed that linear and logarithmic opinion pools arise as special cases of the equilibrium of his intuitive model of securities markets when all agents have generalized logarithmic and negative exponential utilities respectively. Unlike these analyses that abstract away from the microstructure, Ostrovsky [2012] and Iyer et al. [2014] show that certain market structures (including market scoring rules) satisfying mild conditions perform "information aggregation" (i.e. the market's belief measure converges in probability to the ground truth) for repeatedly trading and learning agents with risk-neutral and risk-averse utilities respectively. Our contribution, while drawing inspiration from these sources, differs in that we delve into the characteristics of the evolution of the price rather than the properties of prices in equilibrium, and examine the manner in which the microstructure induces aggregation even if the agents are not Bayesian. While there has also been significant work on market properties in continuous double auctions or markets mediated by sophisticated market-making algorithms (e.g. Cliff and Bruten [1997], Farmer et al. [2005], Brahma et al. [2012] and references therein) when the agents are "zero intelligence" or derivatives thereof (and therefore definitely not Bayesian), this line of literature has not looked at market scoring rules in detail, and analytical results have been rare. In recent years, the literature focusing on the market scoring rule (or, equivalently, the cost function-based market maker) family has grown substantially. Chen and Vaughan [2010] and Frongillo et al.
[2012] have uncovered isomorphisms between this type of market structure and well-known machine learning algorithms. We, on the other hand, are concerned with the similarities between price evolution in MSR-mediated markets and opinion pooling methods (see e.g. Garg et al. [2004]). Our work comes close to that of Sethi and Vaughan [2015] who show analytically that the price sequence of a cost function-based market maker with budget-limited risk-averse traders is "convergent under general conditions", and by simulation that the limiting price of LMSR with multi-shot but myopic logarithmic utility agents is approximately a linear opinion pool of agent beliefs. Abernethy et al. [2014] show that a risk-averse exponential utility agent with an exponential family belief distribution updates the state vector of a generalization of LMSR that they propose to a convex combination of the current market state vector and the natural parameter vector of the agent's own belief distribution (see their Theorem 5.2, Corollary 5.3) – this reduces to a logarithmic opinion pool (LogOP) for classical LMSR. The LMSR-LogOP connection was also noted by Pennock and Xia [2011] (in their Theorem 1) but with respect to an artificial probability distribution based on an agent's observed trade that the authors defined instead of considering traders' belief structure or strategies. We show how results of this type arise as special cases of a more general MSR-OP equivalence that we establish in this paper.

2 Model and definitions

Consider a decision-maker or principal interested in the "opinions" / "beliefs" / "forecasts" of a group of $n$ agents about an extraneous random binary event $X \in \{0, 1\}$, expressed in the form of point probabilities $\pi_i \in (0, 1)$, $i = 1, 2, \dots, n$, i.e. $\pi_i$ is agent $i$'s subjective probability $\Pr(X = 1)$. $X$ can represent a proposition such as "A Republican will win the next U.S.
presidential election" or "The favorite will beat the underdog by more than a pre-determined point spread in a game of football" or "The next Avengers movie will hit a certain box office target in its opening week." In this section, we briefly describe two approaches towards the aggregation of such private beliefs: (1) the opinion pool, which disregards the problem of incentivizing truthful reports, and focuses simply on unifying multiple probabilistic reports on a topic, and (2) the market scoring rule, an incentive-based mechanism for extracting honest beliefs from selfish-rational agents.

2.1 Opinion Pool (OP)

This family of methods takes as input the vector of probabilistic reports $p_i$, $i = 1, 2, \dots, n$ submitted by $n$ agents, also called experts in this context, and computes an aggregate or consensus operator $\hat{p} = f(p_1, p_2, \dots, p_n) \in [0, 1]$. Garg et al. [2004] identified three desiderata for an opinion pool (other criteria are also recognized in the literature, but the following are the most basic and natural):

1. Unanimity: If all experts agree, the aggregate also agrees with them.
2. Boundedness: The aggregate is bounded by the extremes of the inputs.
3. Monotonicity: If one expert changes her opinion in a particular direction while all other experts' opinions remain unaltered, then the aggregate changes in the same direction.

Definition 1. We call $\hat{p} = f(p_1, p_2, \dots, p_n)$ a valid opinion pool for $n$ probabilistic reports if it possesses properties 1, 2, and 3 listed above.

It is easy to derive the following result for recursively defined pooling functions that will prove useful for establishing an equivalence between market scoring rules and opinion pools. The proof is in Section 1 of the Supplementary Material.

Lemma 1. For a two-outcome scenario, if $f_2(r_1, r_2)$ and $f_{n-1}(q_1, q_2, \dots, q_{n-1})$ are valid opinion pools for two probabilistic reports $r_1, r_2$ and $n-1$ probabilistic reports $q_1, q_2, \dots, q_{n-1}$ respectively, then $f(p_1, p_2, \dots, p_n) = f_2(f_{n-1}(p_1, p_2, \dots, p_{n-1}), p_n)$ is also a valid opinion pool for $n$ reports.

Two popular opinion pooling methods are the Linear Opinion Pool (LinOP) and the Logarithmic Opinion Pool (LogOP), which are essentially a weighted average (or convex combination) and a renormalized weighted geometric mean of the experts' probability reports respectively. For a two-outcome scenario,

$\mathrm{LinOP}(p_1, p_2, \dots, p_n) = \sum_{i=1}^n \omega_i^{\mathrm{lin}} p_i$, $\qquad \mathrm{LogOP}(p_1, p_2, \dots, p_n) = \frac{\prod_{i=1}^n p_i^{\omega_i^{\log}}}{\prod_{i=1}^n p_i^{\omega_i^{\log}} + \prod_{i=1}^n (1 - p_i)^{\omega_i^{\log}}}$,

where $\omega_i^{\mathrm{lin}}, \omega_i^{\log} \geq 0\ \forall i = 1, 2, \dots, n$, $\sum_{i=1}^n \omega_i^{\mathrm{lin}} = 1$, $\sum_{i=1}^n \omega_i^{\log} = 1$.

2.2 Market Scoring Rule (MSR)

In general, a scoring rule is a function of two variables $s(p, x) \in \mathbb{R} \cup \{-\infty, \infty\}$, where $p$ is an agent's probabilistic prediction (density or mass function) about an uncertain event, $x$ is the realized or revealed outcome of that event after the prediction has been made, and the resulting value of $s$ is the agent's ex post compensation for prediction. For a binary event $X$, a scoring rule can just be represented by the pair $(s_1(p), s_0(p))$, which is the vector of agent compensations for $\{X = 1\}$ and $\{X = 0\}$ respectively, $p \in [0, 1]$ being the agent's reported probability of $\{X = 1\}$, which may or may not be equal to her true subjective probability, say, $\pi = \Pr(X = 1)$. A scoring rule is defined to be strictly proper if it is incentive-compatible for a risk-neutral agent, i.e. an agent maximizes her subjective expectation of her ex post compensation by reporting her true subjective probability: $\pi = \arg\max_{p \in [0,1]} [\pi s_1(p) + (1 - \pi) s_0(p)]$, $\forall \pi \in [0, 1]$.
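For a binary event, the two pools of Section 2.1 are only a few lines of code. The sketch below (function names and example numbers are illustrative, not from the paper) also checks the unanimity and boundedness desiderata numerically:

```python
import numpy as np

def linop(p, w):
    """Linear opinion pool: weighted average of probability reports."""
    p, w = np.asarray(p, float), np.asarray(w, float)
    assert np.isclose(w.sum(), 1.0) and (w >= 0).all()
    return float(np.dot(w, p))

def logop(p, w):
    """Logarithmic opinion pool: renormalized weighted geometric mean."""
    p, w = np.asarray(p, float), np.asarray(w, float)
    assert np.isclose(w.sum(), 1.0) and (w >= 0).all()
    num = np.prod(p ** w)
    return float(num / (num + np.prod((1.0 - p) ** w)))

reports, weights = [0.6, 0.7, 0.8], [0.5, 0.3, 0.2]
p_lin, p_log = linop(reports, weights), logop(reports, weights)
# Boundedness: both aggregates lie between the extreme reports.
assert min(reports) <= p_lin <= max(reports)
assert min(reports) <= p_log <= max(reports)
```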
In addition, a two-outcome scoring rule is regular if $s_j(\cdot)$ is real-valued except possibly that $s_0(1)$ or $s_1(0)$ is $-\infty$; any regular strictly proper scoring rule can be written in the following form (Gneiting and Raftery [2007]):

$s_j(p) = G(p) + G'(p)(j - p)$, $\quad j \in \{0, 1\}$, $p \in [0, 1]$, (1)

where $G : [0, 1] \to \mathbb{R}$ is a strictly convex function with $G'(\cdot)$ as a sub-gradient which is real-valued except possibly that $-G'(0)$ or $G'(1)$ is $\infty$; if $G(\cdot)$ is differentiable in $(0, 1)$, $G'(\cdot)$ is simply its derivative. A classic example of a regular strictly proper scoring rule is the logarithmic scoring rule:

$s_1(p) = b \ln p$; $\quad s_0(p) = b \ln(1 - p)$, where $b > 0$ is a free parameter. (2)

Hanson [2003] introduced an extension of a scoring rule wherein the principal initiates the process of information elicitation by making a baseline report $p_0$, and then elicits publicly declared reports $p_i$ sequentially from $n$ agents; the ex post compensation $c_x(p_i, p_{i-1})$ received by agent $i$ from the principal, where $x$ is the realized outcome of event $X$, is the difference between the scores assigned to the reports made by herself and her predecessor:

$c_x(p_i, p_{i-1}) \triangleq s_x(p_i) - s_x(p_{i-1})$, $\quad x \in \{0, 1\}$. (3)

If each agent acts non-collusively, risk-neutrally, and myopically (as if her current interaction with the principal is her last), then the incentive compatibility property of a strictly proper score still holds for the sequential version. Moreover, it is easy to show that the principal's worst-case payout (loss) is bounded regardless of agent behavior. In particular, for the binary-outcome logarithmic score, the loss bound for $p_0 = 1/2$ is $b \ln 2$; $b$ can be referred to as the principal's loss parameter. A sequentially shared strictly proper scoring rule of the above form can also be interpreted as a cost function-based prediction market mechanism offering trade in an Arrow-Debreu (i.e. $(0,1)$-valued) security written on the event $X$, hence the name "market scoring rule".
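The logarithmic rule (2), the sequential compensation (3), and strict properness can be sketched in a few lines; the grid search and parameter values below are illustrative assumptions of mine, used only to check incentive compatibility numerically:

```python
import math

def s1(p, b=1.0): return b * math.log(p)        # score if X = 1
def s0(p, b=1.0): return b * math.log(1 - p)    # score if X = 0

def compensation(x, p_i, p_prev, b=1.0):
    """Ex post payment (3): score of own report minus predecessor's."""
    s = s1 if x == 1 else s0
    return s(p_i, b) - s(p_prev, b)

def best_report(pi, p_prev, b=1.0):
    """Risk-neutral agent's optimal report, found by grid search."""
    grid = [k / 1000 for k in range(1, 1000)]
    return max(grid, key=lambda p: pi * compensation(1, p, p_prev, b)
                                 + (1 - pi) * compensation(0, p, p_prev, b))

# Strict properness: the optimal report recovers the true belief,
# regardless of the predecessor's report.
assert abs(best_report(0.3, 0.5) - 0.3) < 1e-9
assert abs(best_report(0.8, 0.2) - 0.8) < 1e-9
```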
The cost function is a strictly convex function of the total outstanding quantity of the security that determines all execution costs; its first derivative (the cost per share of buying or the proceeds per share from selling an infinitesimal quantity of the security) is called the market's "instantaneous price", and can be interpreted as the market maker's current risk-neutral probability (Chen and Pennock [2007]) for $\{X = 1\}$, the starting price being equal to the principal's baseline report $p_0$. Trading occurs in discrete episodes $1, 2, \dots, n$, in each of which an agent orders a quantity of the security to buy or sell given the market's cost function and the (publicly displayed) instantaneous price. Since there is a one-to-one correspondence between agent $i$'s order size and $p_i$, the market's revised instantaneous price after trading with agent $i$, an agent's "action" or trading decision in this setting is identical to making a probability report by selecting a $p_i \in [0, 1]$. If agent $i$ is risk-neutral, then $p_i$ is, by design, her subjective probability $\pi_i$ (see Hanson [2003], Chen and Pennock [2007] for further details).

Definition 2. We call a market scoring rule well-behaved if the underlying scoring rule is regular and strictly proper, and the associated convex function $G(\cdot)$ (as in (1)) is continuous and thrice-differentiable, with $0 < G''(p) < \infty$ and $|G'''(p)| < \infty$ for $0 < p < 1$.

3 MSR behavior with risk-averse myopic agents

We first present general results on the connection between sequential trading in a MSR-mediated market with risk-averse agents and opinion pooling, and then give a more detailed picture for two representative utility functions without and with budget constraints respectively. Please refer to Section 2 of the Supplementary Material for detailed proofs of all results in this section.
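As a sanity check that the logarithmic rule (2) is well-behaved in the sense of Definition 2, its convex function in (1) can be written down explicitly; this short derivation is a standard fact added here for illustration (it is the scaled negative-entropy form):

```latex
G(p) = b\left[p\ln p + (1-p)\ln(1-p)\right], \qquad G'(p) = b\ln\frac{p}{1-p},
\\
s_1(p) = G(p) + G'(p)(1-p)
       = b\left[p\ln p + (1-p)\ln p\right] = b\ln p,
\\
s_0(p) = G(p) - G'(p)\,p
       = b\left[(1-p)\ln(1-p) + p\ln(1-p)\right] = b\ln(1-p),
\\
G''(p) = \frac{b}{p(1-p)}, \qquad G'''(p) = \frac{b\,(2p-1)}{p^2(1-p)^2},
```

which recovers (2), and since $0 < G''(p) < \infty$ and $|G'''(p)| < \infty$ on $(0,1)$, the LMSR is well-behaved per Definition 2.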
Suppose that, in addition to a belief $\pi_i = \Pr(X = 1)$, each agent $i$ has a continuous utility function of wealth $u_i(c)$, where $c \in [c_i^{\min}, \infty]$ denotes her (ex post) wealth, i.e. her net compensation from the market mechanism after the realization of $X$ as defined in (3), and $c_i^{\min} \in [-\infty, 0]$ is her minimum acceptable wealth (a negative value suggests tolerance of debt); $u_i(\cdot)$ satisfies the usual criteria of non-satiation, i.e. $u_i'(c) > 0$ except possibly that $u_i'(\infty) = 0$, and risk aversion, i.e. $u_i''(c) < 0$ except possibly that $u_i''(\infty) = 0$, throughout its domain (Mas-Colell et al. [1995]); in other words, $u_i(\cdot)$ is strictly increasing and strictly concave. Additionally, we require its first two derivatives to be finite and continuous on $[c_i^{\min}, \infty]$ except that we tolerate $u_i'(c_i^{\min}) = \infty$, $u_i''(c_i^{\min}) = -\infty$. Note that, by choosing a finite lower bound $c_i^{\min}$ on the agent's wealth, we can account for any starting wealth or budget constraint that effectively restricts the agent's action space.

Lemma 2. If $|c_i^{\min}| < \infty$, then there exist lower and upper bounds, $p_i^{\min} \in [0, p_{i-1}]$ and $p_i^{\max} \in [p_{i-1}, 1]$ respectively, on the feasible values of the price $p_i$ to which agent $i$ can drive the market regardless of her belief $\pi_i$, where $p_i^{\min} = s_1^{-1}(c_i^{\min} + s_1(p_{i-1}))$ and $p_i^{\max} = s_0^{-1}(c_i^{\min} + s_0(p_{i-1}))$.

Since the latest price $p_{i-1}$ can be viewed as the market's current "state" from myopic agent $i$'s perspective, the agent's final utility depends not only on her own action $p_i$ and the extraneously determined outcome $x$ but also on the current market state $p_{i-1}$ she encounters, her rational action being given by $p_i = \arg\max_{p \in [0,1]} [\pi_i u_i(c_1(p, p_{i-1})) + (1 - \pi_i) u_i(c_0(p, p_{i-1}))]$. This leads us to the main result of this section.

Theorem 1. If a well-behaved market scoring rule for an Arrow-Debreu security with a starting instantaneous price $p_0 \in (0, 1)$ trades with a sequence of $n$ myopic agents with subjective probabilities $\pi_1, \dots, \pi_n \in (0, 1)$ and risk-averse utility functions of wealth $u_1(\cdot), \dots, u_n(\cdot)$ as above, then the updated market price $p_i$ after every trading episode $i \in \{1, 2, \dots, n\}$ is equivalent to a valid opinion pool for the market's initial baseline report $p_0$ and the subjective probabilities $\pi_1, \pi_2, \dots, \pi_i$ of all agents who have traded up to (and including) that episode.

Proof sketch. For every trading episode $i$, by setting the first derivative of agent $i$'s expected utility to zero, and analyzing the resulting equation, we can arrive at the following lemmas.

Lemma 3. Under the conditions of Theorem 1, if $p_{i-1} \in (0, 1)$, then the revised price $p_i$ after agent $i$ trades is the unique solution in $(0, 1)$ to the fixed-point equation:

$p_i = \frac{\pi_i u_i'(c_1(p_i, p_{i-1}))}{\pi_i u_i'(c_1(p_i, p_{i-1})) + (1 - \pi_i) u_i'(c_0(p_i, p_{i-1}))}$. (4)

Since $p_0 \in (0, 1)$ and $\pi_i \in (0, 1)\ \forall i$, $p_i$ is also confined to $(0, 1)\ \forall i$, by induction.

Lemma 4. The implicit function $p_i(p_{i-1}, \pi_i)$ described by (4) has the following properties:

1. $p_i = \pi_i$ (or $p_{i-1}$) if and only if $\pi_i = p_{i-1}$.
2. $0 < \min\{p_{i-1}, \pi_i\} < p_i < \max\{p_{i-1}, \pi_i\} < 1$ whenever $\pi_i \neq p_{i-1}$, $0 < \pi_i, p_{i-1} < 1$.
3. For any given $p_{i-1}$ (resp. $\pi_i$), $p_i$ is a strictly increasing function of $\pi_i$ (resp. $p_{i-1}$).

Evidently, properties 1, 2, and 3 above correspond to the axioms of unanimity, boundedness, and monotonicity respectively, defined in Section 2. Hence, $p_i(p_{i-1}, \pi_i)$ is a valid opinion pooling function for $p_{i-1}, \pi_i$. Finally, since (4) defines the opinion pool $p_i$ recursively in terms of $p_{i-1}$ $\forall i = 1, 2, \dots, n$, we can invoke Lemma 1 to obtain the desired result. □

There are several points worth noting about this result. First, since the updated market price $p_i$ is also equivalent to agent $i$'s action (Section 2.2), the R.H.S. of (4) is agent $i$'s risk-neutral probability (Pennock [1999]) of $\{X = 1\}$, given her utility function, her action, and the current market state. Thus, Lemma 3 is a natural extension of the elicitation properties of a MSR.
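Assuming the logarithmic rule (2) and an illustrative CARA marginal utility, the fixed point of (4) can be located numerically; the bisection and all parameter values below are a sketch of mine, not the paper's algorithm, and the result respects the boundedness property of Lemma 4:

```python
import math

def solve_fixed_point(pi_belief, p_prev, u_prime, b=1.0, tol=1e-12):
    """Solve eq. (4) for the post-trade LMSR price by bisection.

    c1, c0 are the agent's ex post compensations (3) under the
    logarithmic scoring rule (2)."""
    def rhs(p):
        c1 = b * math.log(p / p_prev)              # payoff if X = 1
        c0 = b * math.log((1 - p) / (1 - p_prev))  # payoff if X = 0
        num = pi_belief * u_prime(c1)
        return num / (num + (1 - pi_belief) * u_prime(c0))
    lo, hi = 1e-9, 1 - 1e-9
    while hi - lo > tol:
        mid = (lo + hi) / 2
        # rhs(p) - p is strictly decreasing in p for decreasing u'
        if rhs(mid) > mid: lo = mid
        else: hi = mid
    return (lo + hi) / 2

# CARA marginal utility u'(c) = exp(-c / tau), illustrative tau
tau = 2.0
u_prime = lambda c: math.exp(-c / tau)
p_prev, belief = 0.4, 0.9
p_new = solve_fixed_point(belief, p_prev, u_prime)
# Lemma 4: the new price lies strictly between the old price and the belief.
assert p_prev < p_new < belief
```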
MSRs, by design, elicit subjective probabilities from risk-neutral agents in an incentive-compatible manner; we show that, in general, they elicit risk-neutral probabilities when they interact with risk-averse agents. Lemma 3 is also consistent with the observation of Pennock [1999] that, for all belief elicitation schemes based on monetary incentives, an external observer can only assess a participant's risk-neutral probability uniquely; she cannot discern the participant's belief and utility separately. Second, observe that this pooling operation is accomplished by a MSR even without direct revelation. Finally, notice the presence of the market maker's own initial baseline $p_0$ as a component in the final aggregate; however, for the examples we study below, the impact of $p_0$ diminishes with the participation of more and more informed agents, and we conjecture that this is a generic property. In general, the exact form of this pooling function is determined by the complex interaction between the MSR and agent utility, and a closed form of $p_i$ from (4) might not be attainable in many cases. However, given a particular MSR, we can venture to identify agent utility functions which give rise to well-known opinion pools. Hence, for the rest of this paper, we focus on the logarithmic market scoring rule (LMSR), one of the most popular tools for implementing real-world prediction markets.

3.1 LMSR as LogOP for constant absolute risk aversion (CARA) utility

Theorem 2. If myopic agent $i$, having a subjective belief $\pi_i \in (0, 1)$ and a risk-averse utility function satisfying our criteria, trades with a LMSR market with parameter $b$ and current instantaneous price $p_{i-1}$, then the market's updated price $p_i$ is identical to a logarithmic opinion pool between the current price and the agent's subjective belief, i.e.

$p_i = \frac{\pi_i^{\alpha_i} p_{i-1}^{1-\alpha_i}}{\pi_i^{\alpha_i} p_{i-1}^{1-\alpha_i} + (1 - \pi_i)^{\alpha_i} (1 - p_{i-1})^{1-\alpha_i}}$, $\quad \alpha_i \in (0, 1)$, (5)

if and only if agent $i$'s utility function is of the form

$u_i(c) = \tau_i \left(1 - \exp(-c/\tau_i)\right)$, $\quad c \in \mathbb{R} \cup \{-\infty, \infty\}$, constant $\tau_i \in (0, \infty)$, (6)

the aggregation weight being given by $\alpha_i = \frac{\tau_i/b}{1 + \tau_i/b}$.

The proof is in Section 2.1 of the Supplementary Material. Note that (6) is a standard formulation of the CARA (or negative exponential) utility function with risk tolerance $\tau_i$; the smaller the value of $\tau_i$, the higher is agent $i$'s aversion to risk. The unbounded domain of $u_i(\cdot)$ indicates a lack of budget constraints; risk aversion comes about from the fact that the range of the function is bounded above (by its risk tolerance $\tau_i$) but not bounded below. Moreover, the LogOP equation (5) can alternatively be expressed as a linear update in terms of log-odds ratios, another popular means of formulating one's belief about a binary event:

$l(p_i) = \alpha_i l(\pi_i) + (1 - \alpha_i) l(p_{i-1})$, where $l(p) = \ln\frac{p}{1-p} \in [-\infty, \infty]$ for $p \in [0, 1]$. (7)

Aggregation weight and risk tolerance: Since $\alpha_i$ is an increasing function of an agent's risk tolerance relative to the market's loss parameter (the latter being, in a way, a measure of how much risk the market maker is willing to take), identity (7) implies that the higher an agent's risk tolerance, the larger is the contribution of her belief towards the changed market price, which agrees with intuition. Also note the interesting manner in which the market's loss parameter effectively scales down an agent's risk tolerance, enhancing the inertia factor $(1 - \alpha_i)$ of the price process.

Bayesian interpretation: The Bayesian interpretation of LogOP in general is well-known (Bordley [1982]); we restate it here in a form that is more appropriate for our prediction market setting. We can recast (5) as

$p_i = \frac{p_{i-1} \left(\frac{\pi_i}{p_{i-1}}\right)^{\alpha_i}}{p_{i-1} \left(\frac{\pi_i}{p_{i-1}}\right)^{\alpha_i} + (1 - p_{i-1}) \left(\frac{1 - \pi_i}{1 - p_{i-1}}\right)^{\alpha_i}}$.
This shows that, over the $i$th trading episode $\forall i$, the LMSR-CARA agent market environment is equivalent to a Bayesian learner performing inference on the point estimate of the probability of the forecast event $X$, starting with the common-knowledge prior $\Pr(X = 1) = p_{i-1}$, and having direct access to $\pi_i$ (which corresponds to the "observation" for the inference problem), the likelihood function associated with this observation being $L(X = x \mid \pi_i) \propto \left(\frac{1 - x - \pi_i}{1 - x - p_{i-1}}\right)^{\alpha_i}$, $x \in \{0, 1\}$.

Sequence of one-shot traders: If all $n$ agents in the system have CARA utilities with potentially different risk tolerances, and trade with LMSR myopically only once each in the order $1, \dots, n$, then the "final" market log-odds ratio after these $n$ trades, on unfolding the recursion in (7), is given by $l(p_n) = \tilde{\alpha}_0^n l(p_0) + \sum_{i=1}^n \tilde{\alpha}_i^n l(\pi_i)$. This is a LogOP where $\tilde{\alpha}_0^n = \prod_{i=1}^n (1 - \alpha_i)$ determines the inertia of the market's initial price, which diminishes as more and more traders interact with the market, and $\tilde{\alpha}_j^n$, $j \geq 1$, quantifies the degree to which an individual trader impacts the final (aggregate) market belief; $\tilde{\alpha}_j^n = \alpha_j \prod_{i=j+1}^n (1 - \alpha_i)$, $j = 1, \dots, n-1$, and $\tilde{\alpha}_n^n = \alpha_n$. Interestingly, the weight of an agent's belief depends not only on her own risk tolerance but also on those of all agents succeeding her in the trading sequence (lower weight for a more risk-tolerant successor, ceteris paribus), and is independent of her predecessors' utility parameters. This is sensible since, by the design of a MSR, trader $i$'s belief-dependent action influences the action of each of (rational) traders $i+1, i+2, \dots$, so that the action of each of these successors, in turn, has a role to play in determining the market impact of trader $i$'s belief. In particular, if $\tau_j = \tau > 0\ \forall j \geq 1$, then the aggregation weights satisfy the inequalities $\tilde{\alpha}_{j+1}^n / \tilde{\alpha}_j^n = 1 + \tau/b > 1\ \forall j = 1, \dots, n-1$, i.e. LMSR assigns progressively higher weights to traders arriving later in the market's lifetime when they all exhibit identical constant risk aversion. This seems to be a reasonable aggregation principle in most scenarios wherein the amount of information in the world improves over time. Moreover, in this situation, $\tilde{\alpha}_1^n / \tilde{\alpha}_0^n = \tau/b$, which indicates that the weight of the market's baseline belief in the aggregate may be higher than those of some of the trading agents if the market maker has a comparatively high loss parameter. This strong effect of the trading sequence on the weights of agents' beliefs is a significant difference between the one-shot trader setting and the market equilibrium setting where each agent's weight is independent of the utility function parameters of her peers.

Convergence: If agents' beliefs are themselves independent samples from the same distribution $P$ over $[0, 1]$, i.e. $\pi_i \sim \text{i.i.d. } P\ \forall i$, then by the sum laws of expectation and variance, $\mathbb{E}[l(p_n)] = \tilde{\alpha}_0^n l(p_0) + (1 - \tilde{\alpha}_0^n)\, \mathbb{E}_{\pi \sim P}[l(\pi)]$; $\mathrm{Var}[l(p_n)] = \mathrm{Var}_{\pi \sim P}[l(\pi)] \sum_{i=1}^n (\tilde{\alpha}_i^n)^2$. Hence, using an appropriate concentration inequality (Boucheron et al. [2004]) and the properties of the $\tilde{\alpha}_i^n$'s, we can show that, as $n$ increases, the market log-odds ratio $l(p_n)$ converges to $\mathbb{E}_{\pi \sim P}[l(\pi)]$ with high probability; this convergence guarantee does not require the agents to be Bayesian.

3.2 LMSR as LinOP for an atypical utility with decreasing absolute risk aversion

Theorem 3. If myopic agent $i$, having a subjective belief $\pi_i \in (0, 1)$ and a risk-averse utility function satisfying our criteria, trades with a LMSR market with parameter $b$ and current instantaneous price $p_{i-1}$, then the market's updated price $p_i$ is identical to a linear opinion pool between the current price and the agent's subjective belief, i.e.
$p_i = \beta_i \pi_i + (1 - \beta_i) p_{i-1}$, for some constant $\beta_i \in (0, 1)$, (8)

if and only if agent $i$'s utility function is of the form

$u_i(c) = \ln\left(\exp((c + B_i)/b) - 1\right)$, $\quad c \geq -B_i$, (9)

where $B_i > 0$ represents agent $i$'s budget, the aggregation weight being $\beta_i = 1 - \exp(-B_i/b)$.

The proof is in Section 2.2 of the Supplementary Material. The above atypical utility function has its domain bounded below, and possesses a positive but strictly decreasing Arrow-Pratt absolute risk aversion measure (Mas-Colell et al. [1995]) $A_i(c) = -u_i''(c)/u_i'(c) = \frac{1}{b\left(\exp((c + B_i)/b) - 1\right)}$ for any $b, B_i > 0$. It shares these characteristics with the well-known logarithmic utility function. Moreover, although this function is approximately linear for large (positive) values of the wealth $c$, it is approximately logarithmic when $(c + B_i) \ll b$. Theorem 3 is somewhat surprising since it is logarithmic utility that has traditionally been found to effect a LinOP in a market equilibrium (Pennock [1999], Beygelzimer et al. [2012], Storkey et al. [2015], etc.). Of course in this paper, we are not in an equilibrium / convergence setting, but in light of the above similarities between utility function (9) and logarithmic utility, it is perhaps not unreasonable to ask whether the logarithmic utility-LinOP connection is still maintained approximately for LMSR price evolution under some conditions. We have extensively explored this idea, both analytically and by simulations, and have found that a small agent budget compared to the LMSR loss parameter $b$ seems to produce the desired result (see Section 3 of the Supplementary Material). Note that, unlike in Theorem 2, the equivalence here requires the agent utility function to depend on the market maker's loss parameter $b$ (the scaling factor in the exponential). Since the microstructure is assumed to be common knowledge, as in traditional MSR settings, the consideration of an agent utility that takes into account the market's pricing function is not unreasonable.
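Theorem 3 can be checked numerically by directly maximizing the agent's expected utility under the log MSR; the ternary search and all parameter values below are illustrative assumptions of mine, not the paper's procedure:

```python
import math

def lmsr_payoff(x, p, p_prev, b):
    """Compensation (3) under the logarithmic scoring rule (2)."""
    return b * (math.log(p) - math.log(p_prev)) if x == 1 \
        else b * (math.log(1 - p) - math.log(1 - p_prev))

def optimal_price(belief, p_prev, B, b=1.0):
    """Maximize expected utility of (9) over the feasible price range."""
    u = lambda c: math.log(math.exp((c + B) / b) - 1.0)
    eu = lambda p: belief * u(lmsr_payoff(1, p, p_prev, b)) \
        + (1 - belief) * u(lmsr_payoff(0, p, p_prev, b))
    # Feasible prices keep ex post wealth above -B (the Lemma 2 bounds).
    beta = 1.0 - math.exp(-B / b)
    lo, hi = (1 - beta) * p_prev + 1e-12, beta + (1 - beta) * p_prev - 1e-12
    for _ in range(200):            # ternary search on a concave objective
        m1, m2 = lo + (hi - lo) / 3, hi - (hi - lo) / 3
        if eu(m1) < eu(m2): lo = m1
        else: hi = m2
    return (lo + hi) / 2, beta

belief, p_prev, B = 0.8, 0.3, 0.5
p_new, beta = optimal_price(belief, p_prev, B)
# Theorem 3: the revised price is the LinOP (8) of belief and prior price.
assert abs(p_new - (beta * belief + (1 - beta) * p_prev)) < 1e-6
```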
Since the domain of utility function (9) is bounded below, we can derive $\pi_i$-independent bounds on possible values of $p_i$ from Lemma 2: $p^{\min}_i = (1 - \beta_i) p_{i-1}$, $p^{\max}_i = \beta_i + (1 - \beta_i) p_{i-1}$. Hence, equation (8) becomes $p_i = \pi_i p^{\max}_i + (1 - \pi_i) p^{\min}_i$, i.e. the revised price is a linear interpolation between the agent’s price bounds, her subjective probability itself acting as the interpolation factor. Aggregation weight and budget constraint: Evidently, the aggregation weight of agent i’s belief, $\beta_i = 1 - \exp(-B_i/b)$, is an increasing function of her budget normalized with respect to the market’s loss parameter; it is, in a way, a measure of her relative risk tolerance. Thus, broad characteristics analogous to the ones in Section 3.1 apply to these aggregation weights as well, with the log-odds ratio replaced by the actual market price. Bayesian interpretation: Under the mild technical assumption that agent i’s belief $\pi_i \in (0, 1)$ is rational, and her budget $B_i > 0$ is such that $\beta_i \in (0, 1)$ is also rational, it is possible to obtain positive integers $r_i, N_i$ and a positive rational number $m_{i-1}$ such that $\pi_i = r_i/N_i$ and $\beta_i = N_i/(m_{i-1} + N_i)$. Then, we can rewrite the LinOP equation (8) as $p_i = \frac{r_i + p_{i-1} m_{i-1}}{m_{i-1} + N_i}$, which is equivalent to the posterior expectation of a beta-binomial Bayesian inference procedure described as follows: The forecast event X is modeled as the (future) final flip of a biased coin with an unknown probability of heads. In episode i, the principal (or aggregator) has a prior distribution $\mathrm{Beta}(\mu_{i-1}, \nu_{i-1})$ over this probability, with $\mu_{i-1} = p_{i-1} m_{i-1}$, $\nu_{i-1} = (1 - p_{i-1}) m_{i-1}$. Thus, $p_{i-1}$ is the prior mean and $m_{i-1}$ the corresponding “pseudo-sample size” parameter. Agent i is non-Bayesian, and her subjective probability $\pi_i$, accessible to the aggregator, is her maximum likelihood estimate associated with the (binomial) likelihood of observing $r_i$ heads out of a private sample of $N_i$ independent flips of the above coin ($N_i$ is common knowledge).
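The equivalence between the LinOP update and the beta-binomial posterior mean can be checked on exact rationals. The numbers below are illustrative choices of ours, not values from the paper:

```python
from fractions import Fraction

# With pi_i = r_i / N_i and beta_i = N_i / (m_{i-1} + N_i), the LinOP
# update of eq. (8) equals the posterior mean (r_i + p_{i-1} m_{i-1})
# / (m_{i-1} + N_i) of the beta-binomial procedure described above.
r_i, N_i = 3, 10          # hypothetical private sample: 3 heads of 10
p_prev = Fraction(2, 5)   # aggregator's prior mean p_{i-1}
m_prev = Fraction(6)      # prior "pseudo-sample size" m_{i-1}

pi_i = Fraction(r_i, N_i)
beta_i = Fraction(N_i) / (m_prev + N_i)

linop = beta_i * pi_i + (1 - beta_i) * p_prev
posterior_mean = (r_i + p_prev * m_prev) / (m_prev + N_i)
assert linop == posterior_mean
```

Using `Fraction` keeps the check exact, so the identity holds with no floating-point slack.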
Note that $m_{i-1}, N_i$ are measures of certainty of the aggregator and the trading agent respectively, and the latter’s normalized budget $B_i/b = \ln(1 + N_i/m_{i-1})$ becomes a measure of her certainty relative to the aggregator’s current state in this interpretation. Sequence of one-shot traders and convergence: If all agents have utility (9) with potentially different budgets, and trade with LMSR myopically once each, then the final aggregate market price is given by $p_n = \tilde{\beta}^n_0 p_0 + \sum_{i=1}^n \tilde{\beta}^n_i \pi_i$, which is a LinOP where $\tilde{\beta}^n_0 = \prod_{i=1}^n (1 - \beta_i)$, $\tilde{\beta}^n_j = \beta_j \prod_{i=j+1}^n (1 - \beta_i)\ \forall j = 1, \ldots, n-1$, and $\tilde{\beta}^n_n = \beta_n$. Again, all intuitions about the $\tilde{\alpha}^n_j$ from Section 3.1 carry over to the $\tilde{\beta}^n_j$. Moreover, if $\pi_i \sim_{\text{i.i.d.}} P\ \forall i$, then we can proceed exactly as in Section 3.1 to show that, as n increases, $p_n$ converges to $\mathbb{E}_{\pi \sim P}[\pi]$ with high probability. 4 Discussion and future work We have established the correspondence of a well-known securities market microstructure to a class of traditional belief aggregation methods and, by extension, Bayesian inference procedures in two important cases. An obvious next step is the identification of general conditions under which a MSR and agent utility combination is equivalent to a given pooling operation. Another research direction is extending our results to a sequence of agents who trade repeatedly until “convergence”, taking into account issues such as the order in which agents trade when they return, the effects of the updated wealth after the first trade for agents with budgets, etc. Acknowledgments We are grateful for support from NSF IIS awards 1414452 and 1527037. References Jacob Abernethy, Sindhu Kutty, Sébastien Lahaie, and Rahul Sami. Information aggregation in exponential family markets. In Proc. ACM Conference on Economics and Computation, pages 395–412. ACM, 2014. Alina Beygelzimer, John Langford, and David M. Pennock. Learning performance of prediction markets with Kelly bettors. In Proc. AAMAS, pages 1317–1318, 2012. Robert F. Bordley.
A multiplicative formula for aggregating probability assessments. Management Science, 28(10):1137–1148, 1982. Stéphane Boucheron, Gábor Lugosi, and Olivier Bousquet. Concentration inequalities. In Advanced Lectures on Machine Learning, pages 208–240. Springer, 2004. Aseem Brahma, Mithun Chakraborty, Sanmay Das, Allen Lavoie, and Malik Magdon-Ismail. A Bayesian market maker. In Proc. ACM Conference on Electronic Commerce, pages 215–232. ACM, 2012. Yiling Chen and David M. Pennock. A utility framework for bounded-loss market makers. In Proc. UAI-07, 2007. Yiling Chen and Jennifer Wortman Vaughan. A new understanding of prediction markets via no-regret learning. In Proc. ACM Conference on Electronic Commerce, pages 189–198. ACM, 2010. Dave Cliff and Janet Bruten. Zero is not enough: On the lower limit of agent intelligence for continuous double auction markets. Technical report, HPL-97-141, Hewlett-Packard Laboratories Bristol, 1997. Bo Cowgill and Eric Zitzewitz. Corporate prediction markets: Evidence from Google, Ford, and Koch Industries. Technical report, working paper, 2013. J. Doyne Farmer, Paolo Patelli, and Ilija I. Zovko. The predictive power of zero intelligence in financial markets. PNAS, 102(6):2254–2259, 2005. Rafael M. Frongillo, Nicolás Della Penna, and Mark D. Reid. Interpreting prediction markets: A stochastic approach. In Proc. NIPS, pages 3266–3274, 2012. Ashutosh Garg, T. S. Jayram, Shivakumar Vaithyanathan, and Huaiyu Zhu. Generalized opinion pooling. In Proc. 8th Intl. Symp. on Artificial Intelligence and Mathematics, 2004. Christian Genest and James V. Zidek. Combining probability distributions: A critique and an annotated bibliography. Statistical Science, pages 114–135, 1986. Tilmann Gneiting and Adrian E. Raftery. Strictly proper scoring rules, prediction, and estimation. Journal of the American Statistical Association, 102(477):359–378, 2007. Robin D. Hanson. Combinatorial information market design.
Information Systems Frontiers, 5(1):107–119, 2003. Jinli Hu and Amos J. Storkey. Multi-period trading prediction markets with connections to machine learning. In Proc. ICML, 2014. Krishnamurthy Iyer, Ramesh Johari, and Ciamac C. Moallemi. Information aggregation and allocative efficiency in smooth markets. Management Science, 60(10):2509–2524, 2014. Andreu Mas-Colell, Michael D. Whinston, and Jerry R. Green. Microeconomic Theory, volume 1. New York: Oxford University Press, 1995. Jono Millin, Krzysztof Geras, and Amos J. Storkey. Isoelastic agents and wealth updates in machine learning markets. In Proc. ICML, pages 1815–1822, 2012. Michael Ostrovsky. Information aggregation in dynamic markets with strategic traders. Econometrica, 80(6):2595–2647, 2012. David M. Pennock. Aggregating probabilistic beliefs: Market mechanisms and graphical representations. PhD thesis, The University of Michigan, 1999. David M. Pennock and Lirong Xia. Price updating in combinatorial prediction markets with Bayesian networks. In Proc. UAI, pages 581–588, 2011. Rajiv Sethi and Jennifer Wortman Vaughan. Belief aggregation with automated market makers. Computational Economics, forthcoming, 2015. Available at SSRN 2288670. Amos J. Storkey, Zhanxing Zhu, and Jinli Hu. Aggregation under bias: Rényi divergence aggregation and its implementation via machine learning markets. In Machine Learning and Knowledge Discovery in Databases, pages 560–574. Springer, 2015. James Surowiecki. The Wisdom of Crowds. Anchor, 2005. Justin Wolfers and Eric Zitzewitz. Prediction markets. J. Econ. Perspectives, 18(2):107–126, 2004.
Lifted Inference Rules with Constraints Happy Mittal, Anuj Mahajan Dept. of Comp. Sci. & Engg. I.I.T. Delhi, Hauz Khas New Delhi, 110016, India happy.mittal@cse.iitd.ac.in, anujmahajan.iitd@gmail.com Vibhav Gogate Dept. of Comp. Sci. Univ. of Texas Dallas Richardson, TX 75080, USA vgogate@hlt.utdallas.edu Parag Singla Dept. of Comp. Sci. & Engg. I.I.T. Delhi, Hauz Khas New Delhi, 110016, India parags@cse.iitd.ac.in Abstract Lifted inference rules exploit symmetries for fast reasoning in statistical relational models. Computational complexity of these rules is highly dependent on the choice of the constraint language they operate on and therefore coming up with the right kind of representation is critical to the success of lifted inference. In this paper, we propose a new constraint language, called setineq, which allows subset, equality and inequality constraints, to represent substitutions over the variables in the theory. Our constraint formulation is strictly more expressive than existing representations, yet easy to operate on. We reformulate the three main lifting rules: decomposer, generalized binomial and the recently proposed single occurrence for MAP inference, to work with our constraint representation. Experiments on benchmark MLNs for exact and sampling based inference demonstrate the effectiveness of our approach over several other existing techniques. 1 Introduction Statistical relational models such as Markov logic [5] have the power to represent the rich relational structure as well as the underlying uncertainty, both of which are the characteristics of several real world application domains. Inference in these models can be carried out using existing probabilistic inference techniques over the propositionalized theory (e.g., Belief propagation, MCMC sampling, etc.).
This approach can be sub-optimal since it ignores the rich underlying structure in the relational representation, and as a result does not scale to even moderately sized domains in practice. Lifted inference ameliorates the aforementioned problems by identifying indistinguishable atoms, grouping them together and inferring directly over the groups instead of individual atoms. Starting with the work of Poole [21], a number of lifted inference algorithms have been proposed. These include lifted exact inference techniques such as lifted Variable Elimination (VE) [3, 17], lifted approximate inference algorithms based on message passing such as belief propagation [23, 14, 24], lifted sampling based algorithms [26, 12], lifted search [11], lifted variational inference [2, 20] and lifted knowledge compilation [10, 6, 9]. There has also been some recent work which examines the complexity of lifted inference independent of the specific algorithm used [13, 2, 8]. Just as probabilistic inference algorithms use various rules such as sum-out, conditioning and decomposition to exploit the problem structure, lifted inference algorithms use lifted inference rules to exploit the symmetries. All of them work with an underlying constraint representation that specifies the allowed set of substitutions over variables appearing in the theory. Examples of various constraint representations include weighted parfactors with constraints [3], normal form parfactors [17], hypercube based representations [24], tree based constraints [25] and the constraint free normal form [13]. These formalisms differ from each other not only in terms of the underlying constraint representation but also in how these constraints are processed, e.g., whether they require a constraint solver, splitting as needed versus shattering [15], etc.
The choice of the underlying constraint language can have a significant impact on the time as well as memory complexity of the inference procedure [15], and coming up with the right kind of constraint representation is of prime importance for the success of lifted inference techniques.

Table 1: A comparison of constraint languages proposed in the literature across four dimensions. The deficiencies/missing properties of each language are listed explicitly (e.g., “no subset”, “no union”). Among the existing work, only KC allows for a full set of constraints. GCFOVE (tree-based) and LBP (hypercubes) allow for subset constraints but do not explicitly handle inequality. PTP does not handle subset constraints. For constraint aggregation, most approaches allow only intersection of atomic constraints. GCFOVE and LBP allow union of intersections (DNF) but only deal with subset constraints. See footnote 4 in Broeck [7] regarding KC. Lifted VE, KC and PTP use a general-purpose constraint solver, which may not be tractable. Our approach allows for all the features discussed above and uses a tractable solver. We propose a constrained solution for lifted search and sampling. Among earlier work, only PTP has looked at this problem (both search and sampling); however, it only allows a very restrictive set of constraints.

Approach | Constraint Type | Constraint Aggregation | Tractable Solver | Lifting Algorithm
Lifted VE [4] | eq/ineq, no subset | intersection, no union | no | lifted VE
CFOVE [17] | eq/ineq, no subset | intersection, no union | yes | lifted VE
GCFOVE [25] | subset (tree-based), no inequality | intersection, union | yes | lifted VE
Approx. LBP [24] | subset (hypercube), no inequality | intersection, union | yes | lifted message passing
Knowledge Compilation (KC) [10, 7] | eq/ineq, subset | intersection, no union | no | first-order knowledge compilation
Lifted Inference from the Other Side [13] | normal forms (no constraints) | none | yes | lifting rules: decomposer, binomial
PTP [11] | eq/ineq, no subset | intersection, no union | no | lifted search & sampling: decomposer, binomial
Current Work | eq/ineq, subset | intersection, union | yes | lifted search & sampling: decomposer, binomial, single occurrence

Although there has been some work studying this problem in the context of lifted VE [25], lifted BP [24], and lifted knowledge compilation [10], the existing literature lacks any systematic treatment of this issue in the context of lifted search and sampling based algorithms. This paper focuses on addressing this issue. Table 1 presents a detailed comparison of various constraint languages for lifted inference to date. We make the following contributions. First, we propose a new constraint language called setineq, which allows for subset constraints (i.e., allowed values are constrained to be either inside or outside a given subset), equality constraints and inequality constraints (collectively called atomic constraints) over substitutions of the variables. The set of allowed constraints is expressed as a union over individual constraint tuples, which in turn are conjunctions of atomic constraints. Our constraint language strictly subsumes several of the existing constraint representations, yet allows for efficient constraint processing and, more importantly, does not require a separate constraint solver. Second, we extend the three main lifted inference rules, decomposer and binomial [13], and single occurrence [18] for MAP inference, to work with our proposed constraint language. We provide a detailed analysis of the lifted inference rules in our constraint formalism and formally prove that the normal form representation is strictly subsumed by our constraint formalism. Third, we show that evidence can be efficiently represented in our constraint formulation, which is a key benefit of our approach. Specifically, based on the earlier work of Singla et al. [24], we provide an efficient (greedy) approach to convert the given evidence in database tuple form to our constraint representation.
Finally, we demonstrate experimentally that our new approach is superior to normal forms as well as many other existing approaches on several benchmark MLNs for both exact and approximate inference. 2 Markov Logic We will use a strict subset of first order logic [22], which is composed of constant, variable, and predicate symbols. A term is a variable or a constant. A predicate represents a property of or relation between terms, and takes a finite number of terms as arguments. A literal is a predicate or its negation. A formula is recursively defined as follows: (1) a literal is a formula, (2) the negation of a formula is a formula, (3) if $f_1$ and $f_2$ are formulas, then applying binary logical operators such as $\wedge$ and $\vee$ to $f_1$ and $f_2$ yields a formula, and (4) if x is a variable in a formula f, then $\exists x\, f$ and $\forall x\, f$ are formulas. A first order theory (knowledge base (KB)) is a set of quantified formulas. We will restrict our attention to function-free finite first order logic theories with Herbrand interpretations [22], as done by most earlier work in this domain [5]. We will also restrict our attention to the case of universally quantified variables. A ground atom is a predicate whose terms do not contain any variable in them. Similarly, a ground formula is a formula that has no variables. During the grounding of a theory, each formula is replaced by a conjunction over ground formulas obtained by substituting the universally quantified variables by constants appearing in the theory. A Markov logic network (MLN) [5] (or a Markov logic theory) is defined as a set of pairs $\{(f_i, w_i)\}_{i=1}^m$, where $f_i$ is a first-order formula and $w_i$ is its weight, a real number. Given a finite set of constants C, a Markov logic theory represents a Markov network that has one node for every ground atom in the theory and a feature for every ground formula.
The probability distribution represented by the Markov network is given by $P(\theta) = \frac{1}{Z} \exp\left(\sum_{i=1}^m w_i n_i(\theta)\right)$, where $n_i(\theta)$ denotes the number of true groundings of the ith formula under the assignment $\theta$ to the ground atoms (world), and $Z = \sum_{\theta'} \exp\left(\sum_{i=1}^m w_i n_i(\theta')\right)$ is the normalization constant, called the partition function. It is well known that the prototypical marginal inference task in MLNs – computing the marginal probability of a ground atom given evidence – can be reduced to computing the partition function [11]. Another key inference task is MAP inference, in which the goal is to find an assignment to the ground atoms that has the maximum probability. In its standard form, a Markov logic theory is assumed to be constraint free, i.e. all possible substitutions of variables by constants are considered during the grounding process. In this paper, we introduce the notion of a constrained Markov logic theory, which is specified as a set of triplets $\{(f_i, w_i, S^{\mathbf{x}}_i)\}_{i=1}^m$, where $S^{\mathbf{x}}_i$ specifies a set (union) of constraints defined over the variables $\mathbf{x}$ appearing in the formula. During the grounding process, we restrict to those constant substitutions which satisfy the constraint set associated with a formula. The probability distribution is now defined using the restricted set of groundings allowed by the respective constraint sets over the formulas in the theory. Although we focus on MLNs in this paper, our results can be easily generalized to other representations including weighted parfactors [3] and probabilistic knowledge bases [11]. 3 Constraint Language In this section, we formally define our constraint language and its canonical form. We also define two operators, join and project, for our language. The various features, operators, and properties of the constraint language presented in this section will be useful when we formally extend various lifted inference rules to the constrained Markov logic theory in the next section (Sec. 4). Language Specification.
For simplicity of exposition, we assume that all logical variables take values from the same domain C. Let $\mathbf{x} = \{x_1, x_2, \ldots, x_k\}$ be a set of logical variables. Our constraint language, called setineq, contains three types of atomic constraints: (1) subset constraints (setct), of the form $x_i \in S$ (setinct) or $x_i \notin S$ (setoutct) for some $S \subseteq C$; (2) equality constraints (eqct), of the form $x_i = x_j$; and (3) inequality constraints (ineqct), of the form $x_i \neq x_j$. We will denote an atomic constraint over the set $\mathbf{x}$ by $A^{\mathbf{x}}$. A constraint tuple over $\mathbf{x}$, denoted by $T^{\mathbf{x}}$, is a conjunction of atomic constraints over $\mathbf{x}$, and a constraint set over $\mathbf{x}$, denoted by $S^{\mathbf{x}}$, is a disjunction of constraint tuples over $\mathbf{x}$. An example of a constraint set over a pair of variables $\mathbf{x} = \{x_1, x_2\}$ is $S^{\mathbf{x}} = T^{\mathbf{x}}_1 \vee T^{\mathbf{x}}_2$, where $T^{\mathbf{x}}_1 = [x_1 \in \{A, B\} \wedge x_1 \neq x_2 \wedge x_2 \in \{B, D\}]$ and $T^{\mathbf{x}}_2 = [x_1 \notin \{A, B\} \wedge x_1 = x_2 \wedge x_2 \in \{B, D\}]$. An assignment $\mathbf{v}$ to the variables in $\mathbf{x}$ is a solution of $T^{\mathbf{x}}$ if all constraints in $T^{\mathbf{x}}$ are satisfied by $\mathbf{v}$. Since $S^{\mathbf{x}}$ is a disjunction, by definition, $\mathbf{v}$ is also a solution of $S^{\mathbf{x}}$. Next, we define a canonical representation for our constraint language. We require this definition because symmetries can be easily identified when constraints are expressed in this representation. We begin with some required definitions. The support of a subset constraint is the set of values in C that satisfies the constraint. Two subset constraints $A^{x_1}$ and $A^{x_2}$ are called value identical if $V_1 = V_2$, and value disjoint if $V_1 \cap V_2 = \emptyset$, where $V_1$ and $V_2$ are the supports of $A^{x_1}$ and $A^{x_2}$ respectively. A constraint tuple $T^{\mathbf{x}}$ is transitive over equality if it contains the transitive closure of all its equality constraints. A constraint tuple $T^{\mathbf{x}}$ is transitive over inequality if for every constraint of the form $x_i = x_j$ in $T^{\mathbf{x}}$, whenever $T^{\mathbf{x}}$ contains $x_i \neq x_k$, it also contains $x_j \neq x_k$. Definition 3.1.
A constraint tuple $T^{\mathbf{x}}$ is in canonical form if the following three conditions are satisfied: (1) for each variable $x_i \in \mathbf{x}$, there is exactly one subset constraint in $T^{\mathbf{x}}$; (2) all equality and inequality constraints in $T^{\mathbf{x}}$ are transitive; and (3) all pairs of variables $x_1, x_2$ that participate either in an equality or an inequality constraint have identical supports. A constraint set $S^{\mathbf{x}}$ is in canonical form if all of its constituent constraint tuples are in canonical form. We can easily express a constraint set in an equivalent canonical form by enforcing the three conditions, one by one, on each of its tuples. In our running example, $T^{\mathbf{x}}_1$ can be converted into canonical form by splitting it into four constraint tuples $\{T^{\mathbf{x}}_{11}, T^{\mathbf{x}}_{12}, T^{\mathbf{x}}_{13}, T^{\mathbf{x}}_{14}\}$, where $T^{\mathbf{x}}_{11} = [x_1 \in \{B\} \wedge x_1 \neq x_2 \wedge x_2 \in \{B\}]$, $T^{\mathbf{x}}_{12} = [x_1 \in \{B\} \wedge x_2 \in \{D\}]$, $T^{\mathbf{x}}_{13} = [x_1 \in \{A\} \wedge x_2 \in \{B\}]$, and $T^{\mathbf{x}}_{14} = [x_1 \in \{A\} \wedge x_2 \in \{D\}]$. Similarly for $T^{\mathbf{x}}_2$. We include the conversion algorithm in the supplement due to lack of space. The following theorem summarizes its time complexity. Theorem 3.1.* Given a constraint set $S^{\mathbf{x}}$, each constraint tuple $T^{\mathbf{x}}$ in it can be converted to canonical form in time $O(mk + k^3)$, where m is the total number of constants appearing in any of the subset constraints in $T^{\mathbf{x}}$ and k is the number of variables in $\mathbf{x}$. We define the following two operations in our constraint language. Join: The join operation lets us combine a set of constraints (possibly defined over different sets of variables) into a single constraint. It will be useful when constructing formulas given constrained predicates (refer to Section 4). Let $T^{\mathbf{x}}$ and $T^{\mathbf{y}}$ be constraint tuples over sets of variables $\mathbf{x}$ and $\mathbf{y}$, respectively, and let $\mathbf{z} = \mathbf{x} \cup \mathbf{y}$. The join operation, written as $T^{\mathbf{x}} \bowtie T^{\mathbf{y}}$, results in a constraint tuple $T^{\mathbf{z}}$ which has the conjunction of all the constraints present in $T^{\mathbf{x}}$ and $T^{\mathbf{y}}$.
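As an illustration of the join operation, here is a toy Python encoding of setineq constraint tuples (our own sketch, not the authors' implementation), where a tuple is a set of atomic constraints and join is simply conjunction:

```python
# Atomic constraints are encoded as ("in", var, values), ("eq", v1, v2)
# or ("neq", v1, v2); a constraint tuple is a frozenset of them.
def join(t1, t2):
    """T^x |><| T^y: the conjunction of all atomic constraints in both
    tuples (linear in the size of the inputs)."""
    return t1 | t2

def satisfies(tuple_, assignment):
    """Check whether a full assignment (dict var -> value) is a solution."""
    for kind, a, b in tuple_:
        if kind == "in" and assignment[a] not in b:
            return False
        if kind == "eq" and assignment[a] != assignment[b]:
            return False
        if kind == "neq" and assignment[a] == assignment[b]:
            return False
    return True

# Running example: T_1^x joined with T^y = [x1 != y  and  y in {E, F}].
T_x1 = frozenset({("in", "x1", frozenset("AB")),
                  ("neq", "x1", "x2"),
                  ("in", "x2", frozenset("BD"))})
T_y = frozenset({("neq", "x1", "y"),
                 ("in", "y", frozenset("EF"))})
T_z = join(T_x1, T_y)
```

Here `T_z` collects all five atomic constraints of the joined tuple shown in the text.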
Given the constraint tuple $T^{\mathbf{x}}_1$ in our running example and $T^{\mathbf{y}} = [x_1 \neq y \wedge y \in \{E, F\}]$, $T^{\mathbf{x}}_1 \bowtie T^{\mathbf{y}}$ results in $[x_1 \in \{A, B\} \wedge x_1 \neq x_2 \wedge x_1 \neq y \wedge x_2 \in \{B, D\} \wedge y \in \{E, F\}]$. The complexity of the join operation is linear in the size of the constraint tuples being joined. Project: The project operation lets us eliminate a variable from a given constraint tuple. This is a key operation required in the application of the Binomial rule (refer to Section 4). Let $T^{\mathbf{x}}$ be a constraint tuple. Given $x_i \in \mathbf{x}$, let $\bar{\mathbf{x}}_i = \mathbf{x} \setminus \{x_i\}$. The project operation, written as $\Pi_{\bar{\mathbf{x}}_i} T^{\mathbf{x}}$, results in a constraint tuple $T^{\bar{\mathbf{x}}_i}$ which contains those constraints in $T^{\mathbf{x}}$ not involving $x_i$. We refer to $T^{\bar{\mathbf{x}}_i}$ as the projected constraint for the variables $\bar{\mathbf{x}}_i$. Given a solution $\bar{\mathbf{x}}_i = \bar{\mathbf{v}}_i$ to $T^{\bar{\mathbf{x}}_i}$, the extension count for $\bar{\mathbf{v}}_i$ is defined as the number of unique assignments $x_i = v_i$ such that $\bar{\mathbf{x}}_i = \bar{\mathbf{v}}_i, x_i = v_i$ is a solution for $T^{\mathbf{x}}$. $T^{\bar{\mathbf{x}}_i}$ is said to be count preserving if each of its solutions has the same extension count. We require a tuple to be count preserving in order to correctly maintain the count of the number of solutions during the project operation (also refer to Section 4.3). Lemma 3.1.* Let $T^{\mathbf{x}}$ be a constraint tuple in its canonical form. If $x_i \in \mathbf{x}$ is a variable which is either involved only in a subset constraint or is involved in at least one equality constraint, then the projected constraint $T^{\bar{\mathbf{x}}_i}$ is count preserving. In the former case, the extension count is given by the size of the support of $x_i$; in the latter case, it is equal to 1. When dealing with inequality constraints, the extension count for each solution $\bar{\mathbf{v}}_i$ to the projected constraint $T^{\bar{\mathbf{x}}_i}$ may not be the same, and we need to split the constraint first in order to apply the project operation. For example, consider the constraint $[x_1 \neq x_2 \wedge x_1 \neq x_3 \wedge x_1, x_2, x_3 \in \{A, B, C\}]$. Then, the extension count for the solution $x_2 = A, x_3 = B$ to the projected constraint $T^{\bar{\mathbf{x}}_1}$ is 1, whereas the extension count for the solution $x_2 = x_3 = A$ is 2.
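The extension-count example above can be verified by brute force over the domain {A, B, C}; this sketch is illustrative only:

```python
from itertools import product

# T^x = [x1 != x2, x1 != x3, x1, x2, x3 in {A, B, C}],
# projected onto the pair (x2, x3).
domain = ["A", "B", "C"]
solutions = [(x1, x2, x3) for x1, x2, x3 in product(domain, repeat=3)
             if x1 != x2 and x1 != x3]

def extension_count(v2, v3):
    """Number of x1 values that extend (x2, x3) = (v2, v3) to a solution."""
    return sum(1 for x1, x2, x3 in solutions if (x2, x3) == (v2, v3))
```

For (x2, x3) = (A, B), only x1 = C remains, giving count 1; for x2 = x3 = A, both B and C remain, giving count 2, so this projection is not count preserving without splitting.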
In such cases, we need to split the tuple $T^{\mathbf{x}}$ into multiple constraints such that the extension count property is preserved in each split. Let $\bar{\mathbf{x}}_i$ be the set of variables over which a constraint tuple $T^{\mathbf{x}}$ needs to be projected. Let $\mathbf{y} \subset \mathbf{x}$ be the set of variables with which $x_i$ is involved in an inequality constraint in $T^{\mathbf{x}}$. Then, the tuple $T^{\mathbf{x}}$ can be broken into an equivalent constraint set by considering each possible division of $\mathbf{y}$ into a set of equivalence classes, where variables in the same equivalence class are constrained to be equal and variables in different equivalence classes are constrained to be not equal to each other. The number of such divisions is given by the Bell number [15]. The divisions inconsistent with the already existing constraints over variables in $\mathbf{y}$ can be ignored. The projection operation has linear time complexity once the extension count property has been ensured using splitting as described above (see the supplement for details). 4 Extending Lifted Inference Rules We extend three key lifted inference rules: decomposer [13], binomial [13] and single occurrence [18] (for MAP) to work with our constraint formulation. The exposition for single occurrence has been moved to the supplement due to lack of space. We begin by describing some important definitions and assumptions. Let M be a constrained MLN theory represented by a set of triplets $\{(f_i, w_i, S^{\mathbf{x}}_i)\}_{i=1}^m$. We make three assumptions. First, we assume that each constraint set $S^{\mathbf{x}}_i$ is specified using setineq and is in canonical form. Second, we assume that each formula in the MLN is constant free. This can be achieved by replacing the appearance of a constant by a variable and introducing an appropriate constraint over the new variable (e.g., replacing A by a variable x and a constraint $x \in \{A\}$). Third, we assume that the variables have been standardized apart, i.e., each formula has a unique set of variables associated with it.
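The divisions of y into equivalence classes, whose number is the Bell number, can be enumerated with a short recursion (an illustrative sketch of ours, not the paper's code):

```python
def divisions(variables):
    """Enumerate all divisions of `variables` into equivalence classes
    (variables in a class constrained equal, across classes unequal).
    The number of divisions is the Bell number mentioned in the text."""
    if not variables:
        yield []
        return
    head, rest = variables[0], variables[1:]
    for part in divisions(rest):
        for i in range(len(part)):                    # join an existing class
            yield part[:i] + [[head] + part[i]] + part[i + 1:]
        yield part + [[head]]                         # or start a new class
```

For three variables this yields the 5 divisions expected from the Bell numbers (1, 1, 2, 5, 15, ...); divisions inconsistent with existing constraints would then be filtered out.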
In the following, $\mathbf{x}$ will denote the set of all the (logical) variables appearing in M, and $\mathbf{x}_i$ will denote the set of variables in $f_i$. Similar to earlier work [13, 18], we divide the variables into a set of equivalence classes. Two variables are Tied to each other if they appear as the same argument of a predicate. We take the transitive closure of the Tied relation to obtain the variable equivalence classes. For example, given the theory $P(x) \Rightarrow Q(x, y)$; $Q(u, v) \Rightarrow R(v)$; $R(w) \Rightarrow T(w, z)$, the variable equivalence classes are $\{x, u\}$, $\{y, v, w\}$ and $\{z\}$. We will use the notation $\hat{x}$ to denote the equivalence class to which x belongs. 4.1 Motivation and Key Operations The key intuition behind our approach is as follows. Let x be a variable appearing in a formula $f_i$. Let $T^{\mathbf{x}_i}$ be an associated constraint tuple and V denote the support for x in $T^{\mathbf{x}_i}$. Then, since constraints are in canonical form, for any other variable $x' \in \mathbf{x}_i$ involved in an (in)equality constraint with x, with $V'$ as its support, we have $V = V'$. Therefore, every pair of values $v_i, v_j \in V$ behaves identically with respect to the constraint tuple $T^{\mathbf{x}_i}$, and hence the values are symmetric to each other. Now, we can extend this notion to the other constraints in which x appears, provided the support sets $\{V_l\}_{l=1}^r$ of x in all such constraints are either identical or disjoint. We can then treat each support set $V_l$ for x as a symmetric group of constants which can be argued about in unison. In an unconstrained theory, there is a single disjoint partition of constants, i.e. the entire domain, such that the constants behave identically. Our approach generalizes this idea to groups of constants which behave identically with each other. To this end, we define the following two key operations over the theory, which will be used repeatedly during the application of the lifted inference rules.
Partitioning Operation: We require the support sets of a variable (or sets of variables) over which a lifted rule is being applied to be either identical or disjoint. We say that a theory M defined over a set of (logical) variables $\mathbf{x}$ is partitioned with respect to the variables in the set $\mathbf{y} \subseteq \mathbf{x}$ if, for every pair of subset constraints $A^{x_1}$ and $A^{x_2}$, $x_1, x_2 \in \mathbf{y}$, appearing in tuples of $S^{\mathbf{x}}$, the supports of $A^{x_1}$ and $A^{x_2}$ are either identical or disjoint (but not both). Given a partitioned theory with respect to variables $\mathbf{y}$, we use $\mathcal{V}^{\mathbf{y}} = \{V^{\mathbf{y}}_l\}_{l=1}^r$ to denote the set of the various supports of the variables in $\mathbf{y}$. We refer to the set $\mathcal{V}^{\mathbf{y}}$ as the partition of $\mathbf{y}$ values in M. Our partitioning algorithm considers all the support sets for variables in $\mathbf{y}$ and splits them such that all the splits are identical or disjoint. The constraint tuples can then be split and represented in terms of these fine-grained support sets. We refer the reader to the supplement for a detailed description of our partitioning algorithm. Restriction Operation: Once the values of a set of variables $\mathbf{y}$ have been partitioned into a set $\{V^{\mathbf{y}}_l\}_{l=1}^r$, while applying the lifted inference rules we will often need to argue about those formula groundings which are obtained by restricting $\mathbf{y}$ values to those in a particular set $V^{\mathbf{y}}_l$ (since values in each such support set behave identically to each other). Given $x \in \mathbf{y}$, let $A^x_l$ denote a subset constraint over x with $V^{\mathbf{y}}_l$ as its support. Given a formula $f_i$, we define its restriction to the set $V^{\mathbf{y}}_l$ as the formula obtained by replacing its associated constraint tuple $T^{\mathbf{x}_i}$ with a new constraint tuple of the form $T^{\mathbf{x}_i} \wedge \bigwedge_j A^{x_j}_l$, where the conjunction is taken over each variable $x_j \in \mathbf{y}$ which also appears in $f_i$. The restriction of an MLN M to the set $V^{\mathbf{y}}_l$, denoted by $M^{\mathbf{y}}_l$, is the MLN obtained by restricting each formula in M to the set $V^{\mathbf{y}}_l$.
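The identical-or-disjoint requirement behind the partitioning operation amounts to refining the support sets into disjoint "atoms". A minimal sketch follows (the paper's actual algorithm is in its supplement; this is our own illustration):

```python
def refine(supports):
    """Split a collection of support sets until every pair of resulting
    atoms is disjoint; each original support then equals a union of
    atoms, so all supports are pairwise identical or disjoint."""
    atoms = []
    for s in supports:
        s = set(s)
        refined = []
        for a in atoms:
            if a & s:
                refined.append(a & s)   # part of the atom inside s
            if a - s:
                refined.append(a - s)   # part of the atom outside s
            s -= a
        if s:
            refined.append(s)           # values not seen before
        atoms = refined
    return atoms
```

For example, supports {A, B, C} and {B, C, D} refine into the atoms {A}, {B, C}, {D}, and each original support is a union of atoms.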
The restriction operation can be implemented in a straightforward manner by taking the conjunction with the subset constraints having the desired support set for the variables in $\mathbf{y}$. We next define the formulation of our lifting rules in a constrained theory. 4.2 Decomposer Let M be an MLN theory. Let $\mathbf{x}$ denote the set of variables appearing in M, and let Z(M) denote the partition function for M. We say that an equivalence class $\hat{x}$ is a decomposer [13] of M if (a) whenever $x \in \hat{x}$ occurs in a formula $f \in F$, x appears in every predicate in f, and (b) if $x_i, x_j \in \hat{x}$, then $x_i, x_j$ do not appear as different arguments of any predicate P. Let $\hat{x}$ be a decomposer for M. Let $M'$ be a new theory in which the domain of all the variables belonging to the equivalence class $\hat{x}$ has been reduced to a single constant. The decomposer rule [13] states that the partition function Z(M) can be re-written using $Z(M')$ as $Z(M) = (Z(M'))^m$, where $m = |\mathrm{Dom}(\hat{x})|$ in M. The proof follows from the fact that, since $\hat{x}$ is a decomposer, the theory can be decomposed into m independent but identical (up to the renaming of a constant) theories which do not share any random variables [13]. Next, we extend the decomposer rule above to work with constrained theories. We assume that the theory has been partitioned with respect to the set of variables appearing in the decomposer $\hat{x}$. Let the partition of $\hat{x}$ values in M be given by $\mathcal{V}^{\hat{x}} = \{V^{\hat{x}}_l\}_{l=1}^r$. Now, we define the decomposer rule for a constrained theory using the following theorem. Theorem 4.1.* Let M be a partitioned theory with respect to the decomposer $\hat{x}$. Let $M^{\hat{x}}_l$ denote the restriction of M to the partition element $V^{\hat{x}}_l$. Let $M'^{\hat{x}}_l$ further restrict $M^{\hat{x}}_l$ to a singleton $\{v\}$, where v is some element of the set $V^{\hat{x}}_l$. Then, the partition function Z(M) can be written as $Z(M) = \prod_{l=1}^r Z(M^{\hat{x}}_l) = \prod_{l=1}^r Z(M'^{\hat{x}}_l)^{|V^{\hat{x}}_l|}$. 4.3 Binomial Let M be an unconstrained MLN theory and P be a unary predicate.
Let $\mathbf{x}_j$ denote the set of variables appearing as the first argument of $P$, and let $\mathrm{Dom}(x_j) = \{c_i\}_{i=1}^{n}$ for all $x_j \in \mathbf{x}_j$. Let $M^P_k$ be the theory obtained from $M$ as follows. Given a formula $f_i$ with weight $w_i$ in which $P$ appears, w.l.o.g. let $x_j$ denote the argument of $P$ in $f_i$. For every such formula $f_i$, we replace it by two new formulas, $f^t_i$ and $f^f_i$, obtained by a) substituting true and false, respectively, for the occurrence of $P(x_j)$ in $f_i$, and b) when $x_j$ occurs in $f^t_i$ or $f^f_i$, reducing the domain of $x_j$ to $\{c_i\}_{i=1}^{k}$ in $f^t_i$ and to $\{c_i\}_{i=k+1}^{n}$ in $f^f_i$, where $n = |\mathrm{Dom}(x_j)|$. The weight $w^t_i$ of $f^t_i$ is equal to $w_i$ if $f^t_i$ has an occurrence of $x_j$, and $w_i \cdot k$ otherwise; similarly for $f^f_i$. The Binomial rule [13] states that the partition function $Z(M)$ can be written as
$$Z(M) = \sum_{k=0}^{n} \binom{n}{k} Z(M^P_k).$$
The proof follows from the fact that the calculation of $Z$ can be divided into $n+1$ cases, where case $k$ accounts for the $\binom{n}{k}$ equivalent possibilities of exactly $k$ groundings of $P$ being true and $n-k$ being false, with $k$ ranging from $0$ to $n$.

Next, we extend the above rule to a constrained theory $M$. Let $P$ be a singleton predicate and let $\mathbf{x}_j$ be the set of variables appearing as first arguments of $P$, as before. Let $M$ be partitioned with respect to $\mathbf{x}_j$, and let $\mathcal{V}^{\mathbf{x}_j} = \{V^{\mathbf{x}_j}_l\}_{l=1}^{r}$ denote the partition of $\mathbf{x}_j$ values in $M$. Let $F^P$ denote the set of formulas in which $P$ appears. For every formula $f_i \in F^P$ in which $x_j$ appears only in $P(x_j)$, assume that the projections over the set $\bar{\mathbf{x}}_j$ are count preserving. Then we obtain a new MLN $M^P_{l,k}$ from $M$ in the following manner. Given a formula $f_i \in F^P$ with weight $w_i$ in which $P$ appears, do the following steps: 1) restrict $f_i$ to the set of values $\{v \mid v \notin V^{\mathbf{x}_j}_l\}$ for the variable $x_j$; 2) for the remaining tuples (i.e., where $x_j$ takes values from the set $V^{\mathbf{x}_j}_l$), create two new formulas $f^t_i$ and $f^f_i$, obtained by restricting $f^t_i$ to the set $\{V^{x_j}_{l_1}, \ldots, V^{x_j}_{l_k}\}$ and $f^f_i$ to the set $\{V^{x_j}_{l_{k+1}}, \ldots, V^{x_j}_{l_{n_l}}\}$, respectively.
Here the subscript $n_l = |V^{\mathbf{x}_j}_l|$. 3) Canonicalize the constraints in $f^t_i$ and $f^f_i$. 4) Substitute true and false for $P$ in $f^t_i$ and $f^f_i$, respectively. 5) If $x_j$ appears in $f^t_i$ (after the substitution), its weight $w^t_i$ is equal to $w_i$; otherwise, split $f^t_i$ into $\{f^t_{id}\}_{d=1}^{D}$ such that the projection over $\bar{\mathbf{x}}_j$ in each tuple of $f^t_{id}$ is count preserving with extension count given by $e^t_{ld}$. The weight of each $f^t_{id}$ is $w_i \cdot e^t_{ld}$. Similarly for $f^f_i$. We are now ready to define the Binomial formulation for a constrained theory:

Theorem 4.2.* Let $M$ be an MLN theory partitioned with respect to the variable $x_j$. Let $P(x_j)$ be a singleton predicate. Let the projections $T^{\bar{\mathbf{x}}_j}$ of the tuples associated with the formulas in which $x_j$ appears only in $P(x_j)$ be count preserving. Let $\mathcal{V}^{\mathbf{x}_j} = \{V^{\mathbf{x}_j}_l\}_{l=1}^{r}$ denote the partition of $\mathbf{x}_j$ values in $M$, and let $n_l = |V^{\mathbf{x}_j}_l|$. Then the partition function $Z(M)$ can be computed using the recursive application of the following rule for each $l$:
$$Z(M) = \sum_{k=0}^{n_l} \binom{n_l}{k} Z(M^P_{l,k}).$$

We apply Theorem 4.2 recursively for each partition component in turn to eliminate $P(x_j)$ completely from the theory. The Binomial application as described above involves $\prod_{l=1}^{r}(n_l+1)$ computations of $Z$, whereas a direct grounding method would involve $2^{\sum_l n_l}$ computations (two possibilities for each grounding of $P(x_j)$ in turn). See the supplement for an example.

4.4 Normal Forms and Evidence Processing

Normal Forms: The normal form representation [13] is an unconstrained representation which requires that a) no formula $f_l \in F$ contains constants, and b) the domains of variables belonging to an equivalence class $\hat{x}$ are identical to each other. An (unconstrained) MLN theory with evidence can be converted into normal form by a series of mechanical operations in time polynomial in the size of the theory and the evidence [13, 18]. Any variable value appearing as a constant in a formula or in evidence is split apart from the rest of the domain, and a new variable with a singleton domain is created for it. Constrained theories can be normalized in a similar manner by 1) splitting apart those variables appearing in any subset constraints, 2) simple variable substitution for equality, and 3) introducing explicit evidence predicates for inequality. We can now state the following theorem.

Theorem 4.3.* Let $M$ be a constrained MLN theory. The application of the modified lifting rules over this constrained theory can be exponentially more efficient than first converting the theory into normal form and then applying the original formulation of the lifting rules.

Domain | Source | Rules | Type (# of const.) | Evidence
Friends & Smokers (FS) | Alchemy [5] | Smokes(p) ⇒ Cancer(p); Smokes(p1) ∧ Friends(p1,p2) ⇒ Smokes(p2) | person (var) | Smokes, Cancer
WebKB | Alchemy [25],[24] | PageClass(p1,+c1) ∧ PageClass(p2,+c2) ⇒ Links(p1,p2) | page (271), class (5) | PageClass
IMDB | Alchemy [16] | Director(p) ⇒ !WorksFor(p1,p2); Actor(p) ⇒ !Director(p); Movie(m,p1) ∧ WorksFor(p1,p2) ⇒ Movie(m,p2) | person (278), movie (20) | Actor, Director, Movie

Table 2: Dataset details. "var": domain size varied; "+": a separate weight is learned for each grounding.

Evidence Processing: Given a predicate $P_j(x_1, \ldots, x_k)$, let $E_j$ denote its associated evidence. Further, let $E^t_j$ ($E^f_j$) denote the set of ground atoms of $P_j$ which are assigned true (false) in the evidence, and let $E^u_j$ denote the set of groundings which are unknown (neither true nor false); note that the set $E^u_j$ is specified implicitly. The first step in processing evidence is to convert the sets $E^t_j$ and $E^f_j$ into the constraint representation form for every predicate $P_j$. This is done by using the hypercube representation [24] over the set of variables appearing in the predicate $P_j$. A hypercube over a set of variables can be seen as a constraint tuple specifying a subset constraint over each variable in the set.
A union of hypercubes represents a constraint set consisting of the union of the corresponding constraint tuples. Finding a minimal hypercube decomposition is NP-hard, and we employ the greedy top-down hypercube construction algorithm proposed by Singla et al. [24] (Algorithm 2). The constraint representation for the implicit set $E^u_j$ can be obtained by eliminating the set $E^t_j \cup E^f_j$ from its bounding hypercube (i.e., the one which includes all the groundings in the set) and then calling the hypercube construction algorithm over the remaining set. Once the constraint representation has been created for every set of evidence (and non-evidence) atoms, we join them together to obtain the constrained representation. The join over constraints is implemented as described in Section 3.

5 Experiments

In our experiments, we compared the performance of our constrained formulation of the lifting rules with the normal forms for the task of calculating the partition function $Z$. We refer to our approach as SetInEq and to normal forms as Normal. We also compared with PTP [11], available in Alchemy 2, and with the GCFOVE system [25].¹ Both our systems and GCFOVE are implemented in Java; PTP is implemented in C++. We experimented on four benchmark MLN domains, calculating the partition function using exact as well as approximate inference. Table 2 shows the details of our datasets; details for one of the domains, Professors and Students (PS) [11], are presented in the supplement due to lack of space. Evidence was the only type of constraint considered in our experiments. The experiments on all the datasets except WebKB were carried out on a machine with a 2.20 GHz Intel Core i3 CPU and 4 GB RAM. WebKB is a much larger dataset, and we ran those experiments on a 2.20 GHz Xeon(R) E5-2660 v2 server with 10 cores and 128 GB RAM.

5.1 Exact Inference

We compared the performance of the various algorithms using exact inference on two of the domains: FS and PS.
We do not compare the values of $Z$, since we are dealing with exact inference. In the following, r% evidence on a type means that r% of the constants of that type are randomly selected, and the evidence predicate groundings in which these constants appear are randomly set to true or false; the remaining evidence groundings are set to unknown. The y-axis is plotted on a log scale in the following three graphs. Figure 1a shows the results as the domain size of person is varied from 100 to 800 with 40% evidence in the FS domain. We timed out an algorithm after 1 hour. PTP failed to scale even to size 100 and is not shown in the figure. The time taken by Normal grows very fast, and it times out beyond size 500. SetInEq and GCFOVE have a much slower growth rate, and SetInEq is about an order of magnitude faster than GCFOVE on all domain sizes. Figure 1b shows the time taken by the three algorithms as we vary the evidence on person with a fixed domain size of 500. For all the algorithms, the time first increases with evidence and then drops. SetInEq is up to an order of magnitude faster than GCFOVE and up to 3 orders of magnitude faster than Normal. Figure 1c plots the number of nodes expanded by Normal and SetInEq (the GCFOVE code did not provide an equivalent value). As expected, we see a much larger growth rate for Normal compared to SetInEq.

¹Alchemy 2: code.google.com/p/alchemy-2; GCFOVE: https://dtai.cs.kuleuven.be/software/gcfove

[Figure 1: Results for exact inference on FS. (a) FS: size vs. time (sec); (b) FS: evidence vs. time (sec); (c) FS: size vs. # nodes expanded.]

5.2 Approximate Inference

[Figure 2: Results using approximate inference on WebKB and IMDB. (a) WebKB: size vs. time (sec); (b) IMDB: evidence % vs. time (sec).]

For approximate inference, we could only compare Normal with SetInEq: GCFOVE does not have an approximate variant for computing marginals or the partition function, and PTP using importance sampling is not fully implemented in Alchemy 2. For approximate inference in both Normal and SetInEq, we used the unbiased importance sampling scheme described by Gogate & Domingos [11]. We collected a total of 1000 samples for each estimate and averaged the $Z$ values. In all our experiments below, the $\log(Z)$ values calculated by the two algorithms were within 1% of each other; hence, the estimates are comparable with each other. We compared the performance of the two algorithms on two real-world datasets, IMDB and WebKB (see Table 2). For WebKB, we experimented with the 5 most frequent page classes in the Univ. of Texas fold; it had close to 2.5 million ground clauses. IMDB has 5 equal-sized folds with close to 15K groundings in each; the results presented are averaged over the folds. Figure 2a (y-axis on log scale) shows the time taken by the two algorithms as we vary the subset of pages in our data from 0 to 270. The scaling behavior is similar to that observed for the earlier datasets. Figure 2b plots the timing of the two algorithms as we vary the evidence % on IMDB. SetInEq is able to exploit symmetries with increasing evidence, whereas Normal's performance degrades.

6 Conclusion and Future Work

In this paper, we proposed a new constraint language called SetInEq for relational probabilistic models. Our constraint formalism subsumes most existing formalisms.
We defined efficient operations over our language using a canonical form representation, and extended three key lifting rules, i.e., decomposer, binomial, and single occurrence, to work with our constraint formalism. Experiments on benchmark MLNs validate the efficacy of our approach. Directions for future work include exploiting our constraint formalism to facilitate approximate lifting of the theory.

7 Acknowledgements

Happy Mittal was supported by the TCS Research Scholar Program. Vibhav Gogate was partially supported by the DARPA Probabilistic Programming for Advanced Machine Learning Program under AFRL prime contract number FA8750-14-C-0005. Parag Singla was supported by a Google travel grant to attend the conference. We thank Somdeb Sarkhel for helpful discussions.

References

[1] Udi Apsel, Kristian Kersting, and Martin Mladenov. Lifting relational MAP-LPs using cluster signatures. In Proc. of AAAI-14, pages 2403–2409, 2014.
[2] H. Bui, T. Huynh, and S. Riedel. Automorphism groups of graphical models and lifted variational inference. In Proc. of UAI-13, pages 132–141, 2013.
[3] R. de Salvo Braz, E. Amir, and D. Roth. Lifted first-order probabilistic inference. In Proc. of IJCAI-05, pages 1319–1325, 2005.
[4] R. de Salvo Braz, E. Amir, and D. Roth. Lifted first-order probabilistic inference. In L. Getoor and B. Taskar, editors, Introduction to Statistical Relational Learning. MIT Press, 2007.
[5] P. Domingos and D. Lowd. Markov Logic: An Interface Layer for Artificial Intelligence. Synthesis Lectures on Artificial Intelligence and Machine Learning. Morgan & Claypool Publishers, 2009.
[6] G. Van den Broeck. On the completeness of first-order knowledge compilation for lifted probabilistic inference. In Proc. of NIPS-11, pages 1386–1394, 2011.
[7] G. Van den Broeck. Lifted Inference and Learning in Statistical Relational Models. PhD thesis, KU Leuven, 2013.
[8] G. Van den Broeck. On the complexity and approximation of binary evidence in lifted inference. In Proc.
of NIPS-13, 2013.
[9] G. Van den Broeck and J. Davis. Conditioning in first-order knowledge compilation and lifted probabilistic inference. In Proc. of AAAI-12, 2012.
[10] G. Van den Broeck, N. Taghipour, W. Meert, J. Davis, and L. De Raedt. Lifted probabilistic inference by first-order knowledge compilation. In Proc. of IJCAI-11, 2011.
[11] V. Gogate and P. Domingos. Probabilistic theorem proving. In Proc. of UAI-11, pages 256–265, 2011.
[12] V. Gogate, A. Jha, and D. Venugopal. Advances in lifted importance sampling. In Proc. of AAAI-12, pages 1910–1916, 2012.
[13] A. K. Jha, V. Gogate, A. Meliou, and D. Suciu. Lifted inference seen from the other side: The tractable features. In Proc. of NIPS-10, pages 973–981, 2010.
[14] K. Kersting, B. Ahmadi, and S. Natarajan. Counting belief propagation. In Proc. of UAI-09, pages 277–284, 2009.
[15] J. Kisyński and D. Poole. Constraint processing in lifted probabilistic inference. In Proc. of UAI-09, 2009.
[16] L. Mihalkova and R. Mooney. Bottom-up learning of Markov logic network structure. In Proceedings of the Twenty-Fourth International Conference on Machine Learning, pages 625–632, 2007.
[17] B. Milch, L. S. Zettlemoyer, K. Kersting, M. Haimes, and L. P. Kaelbling. Lifted probabilistic inference with counting formulas. In Proc. of AAAI-08, 2008.
[18] H. Mittal, P. Goyal, V. Gogate, and P. Singla. New rules for domain independent lifted MAP inference. In Proc. of NIPS-14, pages 649–657, 2014.
[19] M. Mladenov, A. Globerson, and K. Kersting. Lifted message passing as reparametrization of graphical models. In Proc. of UAI-14, pages 603–612, 2014.
[20] M. Mladenov and K. Kersting. Equitable partitions of concave free energies. In Proc. of UAI-15, 2015.
[21] D. Poole. First-order probabilistic inference. In Proc. of IJCAI-03, pages 985–991, 2003.
[22] S. J. Russell and P. Norvig. Artificial Intelligence: A Modern Approach (3rd edition). Pearson Education, 2010.
[23] P. Singla and P. Domingos.
Lifted first-order belief propagation. In Proc. of AAAI-08, pages 1094–1099, 2008.
[24] P. Singla, A. Nath, and P. Domingos. Approximate lifted belief propagation. In Proc. of AAAI-14, pages 2497–2504, 2014.
[25] N. Taghipour, D. Fierens, J. Davis, and H. Blockeel. Lifted variable elimination with arbitrary constraints. In Proc. of AISTATS-12, Canary Islands, Spain, 2012.
[26] D. Venugopal and V. Gogate. On lifting the Gibbs sampling algorithm. In Proc. of NIPS-12, pages 1664–1672, 2012.
LASSO with Non-linear Measurements is Equivalent to One With Linear Measurements

Christos Thrampoulidis (cthrampo@caltech.edu), Ehsan Abbasi (eabbasi@caltech.edu), and Babak Hassibi (hassibi@caltech.edu)*
Department of Electrical Engineering, Caltech

Abstract

Consider estimating an unknown, but structured (e.g. sparse, low-rank, etc.), signal $x_0 \in \mathbb{R}^n$ from a vector $y \in \mathbb{R}^m$ of measurements of the form $y_i = g_i(a_i^T x_0)$, where the $a_i$'s are the rows of a known measurement matrix $A$, and $g(\cdot)$ is a (potentially unknown) nonlinear and random link function. Such measurement functions could arise in applications where the measurement device has nonlinearities and uncertainties. They could also arise by design; e.g., $g_i(x) = \mathrm{sign}(x + z_i)$ corresponds to noisy 1-bit quantized measurements. Motivated by the classical work of Brillinger and the more recent work of Plan and Vershynin, we estimate $x_0$ via solving the Generalized LASSO, i.e., $\hat{x} := \arg\min_x \|y - Ax\|_2 + \lambda f(x)$, for some regularization parameter $\lambda > 0$ and some (typically non-smooth) convex regularizer $f(\cdot)$ that promotes the structure of $x_0$, e.g. the $\ell_1$-norm, the nuclear norm, etc. While this approach seems to naively ignore the nonlinear function $g(\cdot)$, both Brillinger (in the non-constrained case) and Plan and Vershynin have shown that, when the entries of $A$ are i.i.d. standard normal, this is a good estimator of $x_0$ up to a constant of proportionality $\mu$, which only depends on $g(\cdot)$. In this work, we considerably strengthen these results by obtaining explicit expressions for $\|\hat{x} - \mu x_0\|_2$ for the regularized Generalized LASSO that are asymptotically precise when $m$ and $n$ grow large. A main result is that the estimation performance of the Generalized LASSO with non-linear measurements is asymptotically the same as that of one whose measurements are linear, $y_i = \mu a_i^T x_0 + \sigma z_i$, with $\mu = \mathbb{E}[\gamma g(\gamma)]$ and $\sigma^2 = \mathbb{E}[(g(\gamma) - \mu\gamma)^2]$, where $\gamma$ is standard normal.
To the best of our knowledge, the derived expressions on the estimation performance are the first known precise results in this context. One interesting consequence of our result is that the optimal quantizer of the measurements that minimizes the estimation error of the Generalized LASSO is the celebrated Lloyd-Max quantizer.

1 Introduction

Non-linear Measurements. Consider the problem of estimating an unknown signal vector $x_0 \in \mathbb{R}^n$ from a vector $y = (y_1, y_2, \ldots, y_m)^T$ of $m$ measurements taking the following form:
$$y_i = g_i(a_i^T x_0), \quad i = 1, 2, \ldots, m. \quad (1)$$
Here, each $a_i$ represents a (known) measurement vector. The $g_i$'s are independent copies of a (generically random) link function $g$. For instance, $g_i(x) = x + z_i$, with say $z_i$ being normally distributed, recovers the standard linear regression setup with Gaussian noise. In this paper, we are particularly interested in scenarios where $g$ is non-linear. Notable examples include $g(x) = \mathrm{sign}(x)$ (or $g_i(x) = \mathrm{sign}(x + z_i)$) and $g(x) = (x)_+$, corresponding to 1-bit quantized (noisy) measurements and to the censored Tobit model, respectively. Depending on the situation, $g$ might be known or unspecified. In the statistics and econometrics literature, the measurement model in (1) is popular under the name single-index model, and several aspects of it have been well studied, e.g. [4, 5, 14, 15].¹

Structured Signals. It is typical that the unknown signal $x_0$ obeys some sort of structure. For instance, it might be sparse, i.e., only a few, $k \ll n$, of its entries are non-zero; or it might be that $x_0 = \mathrm{vec}(X_0)$, where $X_0 \in \mathbb{R}^{\sqrt{n}\times\sqrt{n}}$ is a matrix of low rank $r \ll n$.

*This work was supported in part by the National Science Foundation under grants CNS-0932428, CCF-1018927, CCF-1423663 and CCF-1409204, by a grant from Qualcomm Inc., by NASA's Jet Propulsion Laboratory through the President and Directors Fund, by King Abdulaziz University, and by King Abdullah University of Science and Technology.
To exploit this information, it is typical to associate with the structure of $x_0$ a properly chosen function $f : \mathbb{R}^n \to \mathbb{R}$, which we refer to as the regularizer. Of particular interest are convex and non-smooth such regularizers, e.g. the $\ell_1$-norm for sparse signals, the nuclear norm for low-rank ones, etc. Please refer to [1, 6, 13] for further discussion.

An Algorithm for Linear Measurements: The Generalized LASSO. When the link function is linear, i.e. $g_i(x) = x + z_i$, perhaps the most popular way of estimating $x_0$ is via solving the Generalized LASSO:
$$\hat{x} := \arg\min_x \|y - Ax\|_2 + \lambda f(x). \quad (2)$$
Here, $A = [a_1, a_2, \ldots, a_m]^T \in \mathbb{R}^{m\times n}$ is the known measurement matrix and $\lambda > 0$ is a regularizer parameter. This is often referred to as the $\ell_2$-LASSO or the square-root LASSO [3], to distinguish it from the one solving $\min_x \frac{1}{2}\|y - Ax\|_2^2 + \lambda f(x)$ instead. Our results can be adapted to this latter version, but for concreteness we restrict attention to (2) throughout. The acronym LASSO for (2) was introduced in [22] for the special case of $\ell_1$-regularization; (2) is a natural generalization to other kinds of structure and includes the group-LASSO [25] and the fused-LASSO [23] as special cases. We often drop the term "Generalized" and refer to (2) simply as the LASSO. One popular measure of the estimation performance of (2) is the squared error $\|\hat{x} - x_0\|_2^2$. Recently, there have been significant advances on establishing tight bounds, and even precise characterizations, of this quantity in the presence of linear measurements [2, 10, 16, 18, 19, 21]. Such precise results have been core to building a better understanding of the behavior of the LASSO and, in particular, of the exact role played by the choice of the regularizer $f$ (in accordance with the structure of $x_0$), by the number of measurements $m$, by the value of $\lambda$, etc. In certain cases, they even provide useful insights into practical matters such as the tuning of the regularizer parameter.
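For the smooth variant $\min_x \frac{1}{2}\|y - Ax\|_2^2 + \lambda\|x\|_1$ mentioned above, a standard solver is the proximal-gradient (ISTA) iteration: a gradient step on the quadratic fit term followed by soft-thresholding. The sketch below is a minimal pure-Python illustration on a toy sparse problem; the dimensions, step size, and data are illustrative assumptions, not the paper's setup:

```python
import math
import random

random.seed(0)
m, n, lam, iters = 40, 12, 0.1, 1000

# Toy data: Gaussian A, 2-sparse x0, noiseless linear measurements.
A = [[random.gauss(0, 1) for _ in range(n)] for _ in range(m)]
x0 = [0.0] * n
x0[2], x0[7] = 1.0, -0.5
y = [sum(A[i][j] * x0[j] for j in range(n)) for i in range(m)]

def grad(x):
    # gradient of (1/2)||y - Ax||^2 is A^T (Ax - y)
    r = [sum(A[i][j] * x[j] for j in range(n)) - y[i] for i in range(m)]
    return [sum(A[i][j] * r[i] for i in range(m)) for j in range(n)]

def soft(v, t):
    # soft-thresholding: the proximal operator of t*|.|
    return math.copysign(max(abs(v) - t, 0.0), v)

# Step size 1/L, with L upper-bounded by the squared Frobenius norm of A,
# which guarantees monotone descent of the objective.
L = sum(A[i][j] ** 2 for i in range(m) for j in range(n))
x = [0.0] * n
for _ in range(iters):
    g = grad(x)
    x = [soft(x[j] - g[j] / L, lam / L) for j in range(n)]

err = math.sqrt(sum((x[j] - x0[j]) ** 2 for j in range(n)))
```

Soft-thresholding is the proximal operator of the $\ell_1$-norm; swapping in another regularizer's proximal operator yields the analogous solver for other structures.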
Using the LASSO for Non-linear Measurements? The LASSO is by nature tailored to a linear model for the measurements. Indeed, the first term of the objective function in (2) tries to fit $Ax$ to the observed vector $y$, presuming that $y$ is of the form $y_i = a_i^T x_0 + \text{noise}$. Of course, no one stops us from continuing to use it even in cases where $y_i = g(a_i^T x_0)$ with $g$ non-linear². But the question then becomes: can there be any guarantees that the solution $\hat{x}$ of the Generalized LASSO is still a good estimate of $x_0$? The question just posed was first studied back in the early 80's by Brillinger [5], who provided answers in the case of solving (2) without a regularizer term. This, of course, corresponds to standard Least Squares (LS). Interestingly, he showed that when the measurement vectors are Gaussian, the LS solution is a consistent estimate of $x_0$, up to a constant of proportionality $\mu$ which only depends on the link function $g$. The result is sharp, but only under the assumption that the number of measurements $m$ grows large while the signal dimension $n$ stays fixed, which was the typical setting of interest at the time. In the world of structured signals and high-dimensional measurements, the problem was only very recently revisited by Plan and Vershynin [17]. They consider a constrained version of the Generalized LASSO, in which the regularizer is essentially replaced by a constraint, and derive upper bounds on its performance. The bounds are not tight (they involve absolute constants), but they demonstrate some key features: i) the solution $\hat{x}$ of the constrained LASSO is a good estimate of $x_0$ up to the same constant of proportionality $\mu$ that appears in Brillinger's result; ii) thus, $\|\hat{x} - \mu x_0\|_2^2$ is a natural measure of performance; iii) estimation is possible even with $m < n$ measurements by taking advantage of the structure of $x_0$.
¹The single-index model is a classical topic and can also be regarded as a special case of what is known as the sufficient dimension reduction problem. There is extensive literature on both subjects; unavoidably, we only refer to the directly relevant works here.
²Note that the Generalized LASSO in (2) does not assume knowledge of $g$. All that is assumed is the availability of the measurements $y_i$. Thus, the link function might as well be unknown or unspecified.

[Figure 1: Squared error $\|\mu^{-1}\hat{x} - x_0\|_2^2$ of the $\ell_1$-regularized LASSO with non-linear measurements and with corresponding linear ones, as a function of the regularizer parameter $\lambda$; both compared to the asymptotic prediction. Here, $g_i(x) = \mathrm{sign}(x + 0.3 z_i)$ with $z_i \sim N(0,1)$. The unknown signal $x_0$ is of dimension $n = 768$ and has $\lceil 0.15n \rceil$ non-zero entries (see Sec. 2.2.2 for details). The different curves correspond to $\lceil 0.75n \rceil$ ($m < n$) and $\lceil 1.2n \rceil$ ($m > n$) measurements, respectively. Simulation points are averages over 20 problem realizations.]

1.1 Summary of Contributions

Inspired by the work of Plan and Vershynin [17], and motivated by recent advances on the precise analysis of the Generalized LASSO with linear measurements, this paper extends these latter results to the case of non-linear measurements. When the measurement matrix $A$ has entries i.i.d. Gaussian (henceforth, we assume this to be the case without further reference) and the estimation performance is measured in a mean-squared-error sense, we are able to precisely predict the asymptotic behavior of the error. The derived expression accurately captures the role of the link function $g$, the particular structure of $x_0$, the role of the regularizer $f$, and the value of the regularizer parameter $\lambda$. Further, it holds for all values of $\lambda$ and for a wide class of functions $f$ and $g$.
Interestingly, our result shows in a very precise manner that, in large dimensions, modulo the information about the magnitude of $x_0$, the LASSO treats non-linear measurements exactly as if they were scaled and noisy linear measurements, with scaling factor $\mu$ and noise variance $\sigma^2$ defined as
$$\mu := \mathbb{E}[\gamma g(\gamma)], \quad \text{and} \quad \sigma^2 := \mathbb{E}[(g(\gamma) - \mu\gamma)^2], \quad \text{for } \gamma \sim N(0, 1), \quad (3)$$
where the expectation is with respect to both $\gamma$ and $g$. In particular, when $g$ is such that $\mu \neq 0$³, the estimation performance of the Generalized LASSO with measurements of the form $y_i = g_i(a_i^T x_0)$ is asymptotically the same as if the measurements were of the form $y_i = \mu a_i^T x_0 + \sigma z_i$, with $\mu, \sigma^2$ as in (3) and $z_i$ standard Gaussian noise. Recent analysis of the squared error of the LASSO, when used to recover structured signals from noisy linear observations, provides us with either precise predictions (e.g. [2, 20]) or, in other cases, tight upper bounds (e.g. [10, 16]). Owing to the established relation between non-linear and (corresponding) linear measurements, such results also characterize the performance of the LASSO in the presence of nonlinearities. We remark that some of the error formulae derived here in the general context of non-linear measurements have not previously been known even under the prism of linear measurements. Figure 1 serves as an illustration; the error with non-linear measurements matches well with the error of the corresponding linear ones, and both are accurately predicted by our analytic expression. Under the generic model in (1), which allows $g$ to even be unspecified, $x_0$ can, in principle, be estimated only up to a constant of proportionality [5, 15, 17]. For example, if $g$ is unknown, then any information about the norm $\|x_0\|_2$ could be absorbed in the definition of $g$. The same is true when $g(x) = \mathrm{sign}(x)$, even though $g$ might be known here. In these cases, what becomes important is the direction of $x_0$.
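For the link function of Figure 1, $g_i(x) = \mathrm{sign}(x + 0.3 z_i)$, the parameters in (3) can be evaluated in closed form: writing $w = \gamma + 0.3 z \sim N(0, 1.09)$ and projecting $\gamma$ onto $w$ gives $\mu = \mathbb{E}|w|/1.09 = \sqrt{2/(1.09\pi)} \approx 0.764$, and since $g^2 \equiv 1$, $\sigma^2 = 1 - \mu^2 \approx 0.416$. A quick Monte Carlo check of these values (an illustrative sketch, not the paper's code):

```python
import math
import random

random.seed(1)
N = 200_000

def g(x):
    # noisy 1-bit link from Figure 1: sign(x + 0.3 z), z ~ N(0, 1)
    return math.copysign(1.0, x + 0.3 * random.gauss(0, 1))

# Monte Carlo estimates of mu = E[gamma*g(gamma)] and
# sigma^2 = E[(g(gamma) - mu*gamma)^2] from (3).
samples = [(gm, g(gm)) for gm in (random.gauss(0, 1) for _ in range(N))]
mu = sum(gm * gv for gm, gv in samples) / N
sigma2 = sum((gv - mu * gm) ** 2 for gm, gv in samples) / N

mu_exact = math.sqrt(2 / (1.09 * math.pi))  # ~0.7643 via Gaussian projection
assert abs(mu - mu_exact) < 0.01
assert abs(sigma2 - (1 - mu_exact ** 2)) < 0.01
```

The same two-line estimator applies to any link $g$, known in sampled form only, which is all that is needed to evaluate the asymptotic predictions.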
Motivated by this, and in order to simplify the presentation, we have assumed throughout that $x_0$ has unit Euclidean norm⁴, i.e. $\|x_0\|_2 = 1$.

³This excludes, for example, link functions $g$ that are even, but also some other not so obvious cases [11, Sec. 2.2]. For a few special cases, e.g. sparse recovery with binary measurements $y_i$ [24], different methodologies than the LASSO have recently been proposed that do not require $\mu \neq 0$.
⁴In [17, Remark 1.8], they note that their results can be easily generalized to the case $\|x_0\|_2 \neq 1$ by simply redefining $\bar{g}(x) = g(\|x_0\|_2 x)$ and accordingly adjusting the values of the parameters $\mu$ and $\sigma^2$ in (3). The very same argument is also true in our case.

1.2 Discussion of Relevant Literature

Extending an Old Result. Brillinger [5] identified the asymptotic behavior of the estimation error of the LS solution $\hat{x}_{LS} = (A^T A)^{-1} A^T y$ by showing that, when $n$ (the dimension of $x_0$) is fixed,
$$\lim_{m\to\infty} \sqrt{m}\,\|\hat{x}_{LS} - \mu x_0\|_2 = \sigma\sqrt{n}, \quad (4)$$
where $\mu$ and $\sigma^2$ are the same as in (3). Our result can be viewed as a generalization of the above in several directions. First, we extend (4) to the regime where $m/n = \delta \in (1, \infty)$ and both grow large, by showing that
$$\lim_{n\to\infty} \|\hat{x}_{LS} - \mu x_0\|_2 = \frac{\sigma}{\sqrt{\delta - 1}}. \quad (5)$$
Second, and most importantly, we consider solving the Generalized LASSO instead, to which LS is only a very special case. This allows versions of (5) where the error is finite even when $\delta < 1$ (e.g., see (8)). Note the additional challenges faced when considering the LASSO: i) $\hat{x}$ no longer has a closed-form expression; ii) the result needs to additionally capture the roles of $x_0$, $f$, and $\lambda$.

Motivated by Recent Work. Plan and Vershynin consider a constrained Generalized LASSO:
$$\hat{x}_{\text{C-LASSO}} = \arg\min_{x\in K} \|y - Ax\|_2, \quad (6)$$
with $y$ as in (1) and $K \subset \mathbb{R}^n$ some known set (not necessarily convex). In its simplest form, their result shows that when $m \gtrsim D_K(\mu x_0)$, then with high probability,
$$\|\hat{x}_{\text{C-LASSO}} - \mu x_0\|_2 \lesssim \frac{\sigma\sqrt{D_K(\mu x_0)} + \zeta}{\sqrt{m}}. \quad (7)$$
Here, $D_K(\mu x_0)$ is the Gaussian width, a specific measure of complexity of the constraint set $K$ when viewed from $\mu x_0$. For our purposes, it suffices to remark that if $K$ is properly chosen, and if $\mu x_0$ is on the boundary of $K$, then $D_K(\mu x_0)$ is less than $n$. Thus, estimation is in principle possible with $m < n$ measurements. The parameters $\mu$ and $\sigma$ that appear in (7) are the same as in (3), and $\zeta := \mathbb{E}[(g(\gamma) - \mu\gamma)^2\gamma^2]$. Observe that, in contrast to (4) and to the setting of this paper, the result in (7) is non-asymptotic. Also, it suggests the critical role played by $\mu$ and $\sigma$. On the other hand, (7) is only an upper bound on the error, and it suffers from unknown absolute proportionality constants (hidden in $\lesssim$). Moving the analysis into an asymptotic setting, our work expands upon the result of [17]. First, we consider the regularized LASSO instead, which is more commonly used in practice. Most importantly, we improve the loose upper bounds into precise expressions. In turn, this proves in an exact manner the role played by $\mu$ and $\sigma^2$, of which (7) is only indicative. For a direct comparison with (7), we mention the following result, which follows from our analysis (we omit the proof for brevity). Assume $K$ is convex, $m/n = \delta \in (0, \infty)$, $D_K(\mu x_0)/n = \rho \in (0, 1]$, and $n \to \infty$; also, $\delta > \rho$. Then, (7) yields an upper bound $C\sigma\sqrt{\rho/\delta}$ on the error, for some constant $C > 0$. Instead, we show
$$\|\hat{x}_{\text{C-LASSO}} - \mu x_0\|_2 \leq \frac{\sigma\sqrt{\rho}}{\sqrt{\delta - \rho}}. \quad (8)$$

Precise Analysis of the LASSO with Linear Measurements. The first precise error formulae were established in [2, 10] for the $\ell_2^2$-LASSO with $\ell_1$-regularization. The analysis was based on the Approximate Message Passing (AMP) framework [9]. A more general line of work studies the problem using a recently developed framework termed the Convex Gaussian Min-max Theorem (CGMT) [19], which is a tight version of a classical Gaussian comparison inequality by Gordon [12].
The CGMT framework was initially used by Stojnic [18] to derive tight upper bounds on the constrained LASSO with $\ell_1$-regularization; [16] generalized those to general convex regularizers and also to the $\ell_2$-LASSO; the $\ell_2^2$-LASSO was studied in [21]. Those bounds hold for all values of SNR, but they become tight only in the high-SNR regime. A precise error expression for all values of SNR was derived in [20] for the $\ell_2$-LASSO with $\ell_1$-regularization, under a Gaussianity assumption on the distribution of the non-zero entries of $x_0$. When measurements are linear, our Theorem 2.3 generalizes this assumption. Moreover, our Theorem 2.2 provides error predictions for regularizers going beyond the $\ell_1$-norm, e.g. the $\ell_{1,2}$-norm and the nuclear norm, which appear to be novel. When it comes to non-linear measurements, to the best of our knowledge, this paper is the first to derive asymptotically precise results on the performance of any LASSO-type program.

2 Results

2.1 Modeling Assumptions

Unknown structured signal. We let $x_0 \in \mathbb{R}^n$ represent the unknown signal vector. We assume that $x_0 = \tilde{x}_0/\|\tilde{x}_0\|_2$, with $\tilde{x}_0$ sampled from a probability density $p_{\tilde{x}_0}$ in $\mathbb{R}^n$. Thus, $x_0$ is deterministically of unit Euclidean norm (this is mostly to simplify the presentation; see Footnote 4). Information about the structure of $\tilde{x}_0$ (and correspondingly of $x_0$) is encoded in $p_{\tilde{x}_0}$. E.g., to study an $x_0$ which is sparse, it is typical to assume that its entries are i.i.d. $\tilde{x}_{0,i} \sim (1-\rho)\delta_0 + \rho q_{X_0}$, where $\rho \in (0, 1)$ becomes the normalized sparsity level, $q_{X_0}$ is a scalar p.d.f., and $\delta_0$ is the Dirac delta function⁵.

Regularizer. We consider convex regularizers $f : \mathbb{R}^n \to \mathbb{R}$.

Measurement matrix. The entries of $A \in \mathbb{R}^{m\times n}$ are i.i.d. $N(0, 1)$.

Measurements and Link Function. We observe $y = \vec{g}(Ax_0)$, where $\vec{g}$ is a (possibly random) map from $\mathbb{R}^m$ to $\mathbb{R}^m$ with $\vec{g}(u) = [g_1(u_1), \ldots, g_m(u_m)]^T$. Each $g_i$ is drawn i.i.d. from a real-valued random function $g$ for which $\mu$ and $\sigma^2$ are defined in (3). We assume that $\mu$ and $\sigma^2$ are nonzero and bounded.

Asymptotics.
We study a linear asymptotic regime. In particular, we consider a sequence of problem instances {x̄₀⁽ⁿ⁾, A⁽ⁿ⁾, f⁽ⁿ⁾, m⁽ⁿ⁾}_{n∈ℕ} indexed by n, such that A⁽ⁿ⁾ ∈ ℝ^{m×n} has entries i.i.d. N(0, 1), f⁽ⁿ⁾ : ℝⁿ → ℝ is proper convex, and m := m(n) with m = δn, δ ∈ (0, ∞). We further require that the following conditions hold:

(a) x̄₀⁽ⁿ⁾ is sampled from a probability density p⁽ⁿ⁾_{x̄₀} in ℝⁿ with one-dimensional marginals that are independent of n and have bounded second moments. Furthermore, n⁻¹‖x̄₀⁽ⁿ⁾‖₂² → σ²_{x̄} = 1 in probability.

(b) For any n ∈ ℕ and any ‖x‖₂ ≤ C, it holds that n^{−1/2} f⁽ⁿ⁾(x) ≤ c₁ and n^{−1/2} max_{s ∈ ∂f⁽ⁿ⁾(x)} ‖s‖₂ ≤ c₂, for constants c₁, c₂, C ≥ 0 independent of n.

In (a), convergence is in probability as n → ∞. The assumption σ²_{x̄} = 1 holds without loss of generality and is only necessary to simplify the presentation. In (b), ∂f(x) denotes the subdifferential of f at x; the condition itself is no more than a normalization condition on f. Every such sequence {x̄₀⁽ⁿ⁾, A⁽ⁿ⁾, f⁽ⁿ⁾}_{n∈ℕ} generates a sequence {x₀⁽ⁿ⁾, y⁽ⁿ⁾}_{n∈ℕ}, where x₀⁽ⁿ⁾ := x̄₀⁽ⁿ⁾/‖x̄₀⁽ⁿ⁾‖₂ and y⁽ⁿ⁾ := g⃗⁽ⁿ⁾(Ax₀). When clear from the context, we drop the superscript (n).

2.2 Precise Error Prediction

Let {x₀⁽ⁿ⁾, A⁽ⁿ⁾, f⁽ⁿ⁾, y⁽ⁿ⁾}_{n∈ℕ} be a sequence of problem instances satisfying all the conditions above. With these, define the sequence {x̂⁽ⁿ⁾}_{n∈ℕ} of solutions to the corresponding LASSO problems for fixed λ > 0:
$$\hat{x}^{(n)} := \arg\min_{x} \frac{1}{\sqrt{n}}\left\{ \|y^{(n)} - A^{(n)}x\|_2 + \lambda f^{(n)}(x) \right\}. \tag{9}$$
The main contribution of this paper is a precise evaluation of lim_{n→∞} ‖μ⁻¹x̂⁽ⁿ⁾ − x₀⁽ⁿ⁾‖₂², with high probability over the randomness of A, of x̄₀, and of g.

2.2.1 General Result

To state the result in a general framework, we require a further assumption on p⁽ⁿ⁾_{x̄₀} and f⁽ⁿ⁾. Later in this section we illustrate how this assumption can be naturally met. We write f* for the Fenchel conjugate of f, i.e. f*(v) := sup_x xᵀv − f(x); also, we denote the Moreau envelope of f at v with index τ by e_{f,τ}(v) := min_x { ½‖v − x‖₂² + τ f(x) }.
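For concreteness, the sparse prior and the normalization condition (a) above can be simulated directly. The sketch below is illustrative only (not from the paper); it takes q_{X₀} = N(0, 1/ρ) so that each entry has unit second moment, which makes n⁻¹‖x̄₀‖₂² concentrate around 1:

```python
import math
import random

def sample_sparse_signal(n, rho, rng):
    """Draw xbar_0 with i.i.d. entries from (1 - rho)*delta_0 + rho*N(0, 1/rho).

    The variance 1/rho of the Gaussian component gives E[xbar_{0,i}^2] = 1,
    so n^{-1} * ||xbar_0||_2^2 -> 1, matching condition (a)."""
    scale = math.sqrt(1.0 / rho)
    return [rng.gauss(0.0, scale) if rng.random() < rho else 0.0
            for _ in range(n)]

def normalize(x):
    """x_0 = xbar_0 / ||xbar_0||_2, the unit-norm signal actually measured."""
    nrm = math.sqrt(sum(v * v for v in x))
    return [v / nrm for v in x]

if __name__ == "__main__":
    rng = random.Random(0)
    n, rho = 100_000, 0.1
    xbar = sample_sparse_signal(n, rho, rng)
    x0 = normalize(xbar)
    print(round(sum(v * v for v in xbar) / n, 1))              # approx 1.0
    print(round(sum(1 for v in xbar if v != 0.0) / n, 2))      # approx rho
```

With 10⁵ entries, both the empirical second moment and the fraction of non-zeros concentrate tightly, which is exactly the asymptotic regime the conditions describe.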
Assumption 1. We say Assumption 1 holds if for all non-negative constants c₁, c₂, c₃ ∈ ℝ the point-wise limit of
$$\frac{1}{n}\, e_{\sqrt{n}(f^*)^{(n)},\, c_3}\!\left(c_1 h + c_2 x_0\right)$$
exists with probability one over h ∼ N(0, Iₙ) and x₀ ∼ p⁽ⁿ⁾_{x₀}. Then, we denote the limiting value by F(c₁, c₂, c₃).

Theorem 2.1 (Non-linear = Linear). Consider the asymptotic setup of Section 2.1 and let Assumption 1 hold. Recall μ and σ² as in (3) and let x̂ be the minimizer of the Generalized LASSO in (9) for fixed λ > 0 and for measurements given by (1). Further, let x̂_lin be the solution to the Generalized LASSO when used with linear measurements of the form y_lin = A(μx₀) + σz, where z has entries i.i.d. standard normal. Then, in the limit of n → ∞, with probability one,
$$\|\hat{x} - \mu x_0\|_2^2 = \|\hat{x}_{\text{lin}} - \mu x_0\|_2^2.$$
⁵Such models have been widely used in the relevant literature, e.g. [7, 8, 10]. In fact, the results here continue to hold as long as the marginal distribution of x̄₀ converges to a given distribution (as in [2]).

Theorem 2.1 relates in a very precise manner the error of the Generalized LASSO under non-linear measurements to the error of the same algorithm when used with appropriately scaled noisy linear measurements. Theorem 2.2 below derives an asymptotically exact expression for the error.

Theorem 2.2 (Precise Error Formula). Under the same assumptions as Theorem 2.1 and with δ := m/n, it holds, with probability one,
$$\lim_{n\to\infty}\|\hat{x} - \mu x_0\|_2^2 = \alpha_*^2,$$
where α* is the unique optimal solution to the convex program
$$\max_{\substack{0\le\beta\le 1 \\ \tau\ge 0}}\; \min_{\alpha\ge 0}\;\; \beta\sqrt{\delta}\sqrt{\alpha^2+\sigma^2} - \frac{\alpha\tau}{2} + \frac{\mu^2\tau}{2\alpha} - \frac{\alpha\lambda^2}{\tau}\, F\!\left(\frac{\beta}{\lambda},\, \frac{\mu\tau}{\lambda\alpha},\, \frac{\tau}{\lambda\alpha}\right). \tag{10}$$
Also, the optimal cost of the LASSO in (9) converges to the optimal cost of the program in (10). Under the stated conditions, Theorem 2.2 proves that the limit of ‖x̂ − μx₀‖₂ exists and is equal to the unique solution of the optimization program in (10). Notice that this is a deterministic and convex optimization problem, which involves only three scalar optimization variables.
Thus, the optimal α* can, in principle, be efficiently computed numerically. In many specific cases of interest, with some extra effort, it is possible to obtain simpler expressions for α*; see, e.g., Theorem 2.3 below. The roles of the normalized number of measurements δ = m/n, of the regularizer parameter λ, and of g (through μ and σ²) are explicit in (10); the structure of x₀ and the choice of the regularizer f are implicit in F. Figures 1–2 illustrate the accuracy of the prediction of the theorem in a number of different settings. The proofs of both theorems are deferred to Appendix A. In the next sections, we specialize Theorem 2.2 to the cases of sparse, group-sparse and low-rank signal recovery.

2.2.2 Sparse Recovery

Assume each entry x₀,ᵢ, i = 1, …, n, is sampled i.i.d. from a distribution
$$p_{X_0}(x) = (1-\rho)\cdot\delta_0(x) + \rho\cdot q_{X_0}(x), \tag{11}$$
where δ₀ is the Dirac delta function, ρ ∈ (0, 1), and q_{X₀} is a probability density function with second moment normalized to 1/ρ, so that condition (a) of Section 2.1 is satisfied. Then, after the normalization of Section 2.1, x₀ is ρn-sparse on average and has unit Euclidean norm. Letting f(x) = ‖x‖₁ also satisfies condition (b). Let us now check Assumption 1. The Fenchel conjugate of the ℓ₁-norm is simply the indicator function of the ℓ∞ unit ball. Hence, without much effort,
$$\frac{1}{n}\, e_{\sqrt{n}(f^*)^{(n)},\, c_3}(c_1 h + c_2 x_0) = \frac{1}{2n}\sum_{i=1}^{n}\min_{|v_i|\le 1}\left(v_i - (c_1 h_i + c_2 x_{0,i})\right)^2 = \frac{1}{2n}\sum_{i=1}^{n}\eta^2(c_1 h_i + c_2 x_{0,i};\, 1), \tag{12}$$
where we have denoted
$$\eta(x;\tau) := \frac{x}{|x|}\,(|x| - \tau)_+ \tag{13}$$
for the soft-thresholding operator. An application of the weak law of large numbers shows that the limit of the expression in (12) equals F(c₁, c₂, c₃) := ½ E[η²(c₁h + c₂X₀; 1)], where the expectation is over h ∼ N(0, 1) and X₀ ∼ p_{X₀}. With all these, Theorem 2.2 is applicable. We have put extra effort into obtaining the following equivalent but more insightful characterization of the error, as stated below and proved in Appendix B.

Theorem 2.3 (Sparse Recovery). If δ > 1, then define λ_crit = 0.
Otherwise, let λ_crit, κ_crit be the unique pair of solutions to the following set of equations:
$$\kappa^2\delta = \sigma^2 + \mathbb{E}\left[\left(\eta(\kappa h + \mu X_0;\, \kappa\lambda) - \mu X_0\right)^2\right], \tag{14}$$
$$\kappa\delta = \mathbb{E}\left[\eta(\kappa h + \mu X_0;\, \kappa\lambda)\cdot h\right], \tag{15}$$
where h ∼ N(0, 1) and is independent of X₀ ∼ p_{X₀}. Then, for any λ > 0, with probability one,
$$\lim_{n\to\infty}\|\hat{x} - \mu x_0\|_2^2 = \begin{cases} \delta\kappa_{\text{crit}}^2 - \sigma^2, & \lambda \le \lambda_{\text{crit}},\\ \delta\kappa_*^2(\lambda) - \sigma^2, & \lambda \ge \lambda_{\text{crit}},\end{cases}$$
where κ*²(λ) is the unique solution to (14).

Figure 2: Squared error of the LASSO as a function of the regularizer parameter, compared to the asymptotic predictions. Simulation points represent averages over 20 realizations. (a) Illustration of Thm. 2.3 for g(x) = sign(x), n = 512, p_{X₀}(+1) = p_{X₀}(−1) = 0.05, p_{X₀}(0) = 0.9, and two values of δ, namely 0.75 and 1.2. (b) Illustration of Thm. 2.2 for x₀ group-sparse as in Section 2.2.3 and gᵢ(x) = sign(x + 0.3zᵢ). In particular, x₀ is composed of t = 512 blocks of block size b = 3. Each block is zero with probability 0.95; otherwise its entries are i.i.d. N(0, 1). Finally, δ = 0.75.

Figures 1 and 2(a) validate the prediction of the theorem for different signal distributions, namely q_{X₀} Gaussian and Bernoulli, respectively. For the case of compressed (δ < 1) measurements, observe the two different regimes of operation, one for λ ≤ λ_crit and the other for λ ≥ λ_crit, precisely as predicted by the theorem (see also [16, Sec. 8]). The special case of Theorem 2.3 in which q_{X₀} is Gaussian has been previously studied in [20]. Otherwise, to the best of our knowledge, this is the first precise analysis result for the ℓ₂-LASSO stated in this generality. An analogous result, obtained via different analysis tools, has only been known for the ℓ₂²-LASSO, as appears in [2].
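The soft-thresholding operator (13) and a Monte Carlo evaluation of F(c₁, c₂, c₃) for the sparse prior (11) can be sketched as follows. This is an illustrative sketch only (not the authors' code); q_{X₀} = N(0, 1/ρ) and the sample size are demo assumptions:

```python
import math
import random

def eta(x, tau):
    """Soft-thresholding operator of Eq. (13): sign(x) * max(|x| - tau, 0)."""
    return math.copysign(max(abs(x) - tau, 0.0), x)

def F_sparse(c1, c2, rho=0.1, n_samples=100_000, seed=0):
    """Monte Carlo estimate of F(c1, c2, c3) = 0.5 * E[eta(c1*h + c2*X0; 1)^2]
    for the sparse prior (11) with q_{X0} = N(0, 1/rho); c3 drops out here."""
    rng = random.Random(seed)
    acc = 0.0
    for _ in range(n_samples):
        h = rng.gauss(0.0, 1.0)
        x0 = rng.gauss(0.0, math.sqrt(1.0 / rho)) if rng.random() < rho else 0.0
        acc += eta(c1 * h + c2 * x0, 1.0) ** 2
    return 0.5 * acc / n_samples

if __name__ == "__main__":
    print(eta(3.0, 1.0))     # 2.0
    print(eta(0.5, 1.0))     # 0.0 (inside the threshold, shrunk to zero)
    print(F_sparse(1.0, 1.0) > 0.0)   # True
```

Plugging such an F into the scalar program (10) is what makes the error prediction computable in practice.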
2.2.3 Group-Sparse Recovery

Let x₀ ∈ ℝⁿ be composed of t non-overlapping blocks of constant size b each, such that n = t·b. Each block [x₀]ᵢ, i = 1, …, t, is sampled i.i.d. from a probability density in ℝᵇ: p_{X₀}(x) = (1 − ρ)·δ₀(x) + ρ·q_{X₀}(x), x ∈ ℝᵇ, where ρ ∈ (0, 1). Thus, x₀ is ρt-block-sparse on average. We operate in the regime of linear measurements m/n = δ ∈ (0, ∞). As is common, we use the ℓ₁,₂-norm to induce block-sparsity, i.e. f(x) = Σᵢ₌₁ᵗ ‖[x]ᵢ‖₂; with this choice, (9) is often referred to as the group-LASSO in the literature [25]. It is not hard to show that Assumption 1 holds with F(c₁, c₂, c₃) := (1/(2b)) E[‖η⃗(c₁h + c₂X₀; 1)‖₂²], where
$$\vec{\eta}(x;\tau) = \frac{x}{\|x\|_2}\,(\|x\|_2 - \tau)_+, \quad x\in\mathbb{R}^b,$$
is the vector soft-thresholding operator, and h ∼ N(0, I_b) and X₀ ∼ p_{X₀} are independent. Thus, Theorem 2.2 is applicable in this setting; Figure 2(b) illustrates the accuracy of the prediction.

2.2.4 Low-Rank Matrix Recovery

Let X₀ ∈ ℝ^{d×d} be an unknown matrix of rank r, in which case x₀ = vec(X₀) with n = d². Assume m/d² = δ ∈ (0, ∞) and r/d = ρ ∈ (0, 1). As usual in this setting, we consider nuclear-norm regularization; in particular, we choose f(x) = √d ‖X‖*. Each subgradient S ∈ ∂f(X) then satisfies ‖S‖_F ≤ d, in agreement with assumption (b) of Section 2.1. Furthermore, for this choice of regularizer, we have
$$\frac{1}{n}\, e_{\sqrt{n}(f^*)^{(n)},\, c_3}(c_1 H + c_2 X_0) = \frac{1}{2d^2}\min_{\|V\|_2\le\sqrt{d}}\|V - (c_1 H + c_2 X_0)\|_F^2 = \frac{1}{2d}\min_{\|V\|_2\le 1}\|V - d^{-1/2}(c_1 H + c_2 X_0)\|_F^2 = \frac{1}{2d}\sum_{i=1}^{d}\eta^2\!\left(s_i\!\left(d^{-1/2}(c_1 H + c_2 X_0)\right);\, 1\right),$$
where η(·; ·) is as in (13), sᵢ(·) denotes the i-th singular value of its argument, and H ∈ ℝ^{d×d} has entries i.i.d. N(0, 1). If conditions are met such that the empirical distribution of the singular values of (the sequence of random matrices) d^{−1/2}(c₁H + c₂X₀) converges asymptotically to a limiting distribution, say q(c₁, c₂), then F(c₁, c₂, c₃) := ½ E_{x∼q(c₁,c₂)}[η²(x; 1)], and Theorems 2.1–2.2 apply.
For instance, this will be the case if d^{−1/2}X₀ = USVᵀ, where U, V are unitary matrices and S is a diagonal matrix whose entries have a given marginal distribution with bounded moments (in particular, independent of d). We leave the details and the problem of (numerically) evaluating F to future work.

2.3 An Application to q-bit Compressive Sensing

2.3.1 Setup

Consider recovering a sparse unknown signal x₀ ∈ ℝⁿ from scalar q-bit quantized linear measurements. Let t := {t₀ = 0, t₁, …, t_{L−1}, t_L = +∞} represent a set of decision thresholds (symmetric with respect to 0) and ℓ := {±ℓ₁, ±ℓ₂, …, ±ℓ_L} the corresponding representation points, such that L = 2^{q−1}. Then, quantization of a real number x into q bits can be represented as
$$Q_q(x, \ell, t) = \mathrm{sign}(x)\sum_{i=1}^{L}\ell_i\,\mathbb{1}_{\{t_{i-1}\le |x|\le t_i\}},$$
where 𝟙_S is the indicator function of a set S. For example, 1-bit quantization with level ℓ corresponds to Q₁(x, ℓ) = ℓ·sign(x). The measurement vector y = [y₁, y₂, …, y_m]ᵀ takes the form
$$y_i = Q_q(a_i^T x_0, \ell, t), \quad i = 1, 2, \ldots, m, \tag{16}$$
where the aᵢᵀ are the rows of a measurement matrix A ∈ ℝ^{m×n}, which is henceforth assumed i.i.d. standard Gaussian. We use the LASSO to obtain an estimate x̂ of x₀ as
$$\hat{x} := \arg\min_x \|y - Ax\|_2 + \lambda\|x\|_1. \tag{17}$$
Henceforth, we assume for simplicity that ‖x₀‖₂ = 1. Also, in our case μ is known since g = Q_q is known; thus, it is reasonable to scale the solution of (17) as μ⁻¹x̂ and consider the error quantity ‖μ⁻¹x̂ − x₀‖₂ as a measure of estimation performance. Clearly, the error depends (among other things) on the number of bits q, on the choice of the decision thresholds t, and on the quantization levels ℓ. An interesting question of practical importance is how to choose these optimally so as to minimize the error. As a running example for this section, we seek optimal quantization thresholds and corresponding levels
$$(t_*, \ell_*) = \arg\min_{t,\ell}\|\mu^{-1}\hat{x} - x_0\|_2, \tag{18}$$
while keeping all other parameters, such as the number of bits q and of measurements m, fixed.
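As an illustration, the quantizer Q_q above and the quantities μ, τ², σ² given in Eq. (20) of the next subsection can be computed directly. This is a hedged sketch (not the authors' code); the 2-bit thresholds and levels in the demo are arbitrary, and the 1-bit check uses the closed forms μ = √(2/π)·ℓ and σ² = ℓ²(1 − 2/π):

```python
import math

def quantize(x, levels, thresholds):
    """q-bit scalar quantizer Q_q(x; l, t): sign(x) times the level of the
    cell [t_{i-1}, t_i) containing |x|; t_L = +inf is implicit."""
    sign = 1.0 if x >= 0 else -1.0
    mag = abs(x)
    for i in range(1, len(thresholds)):
        if mag < thresholds[i]:
            return sign * levels[i - 1]
    return sign * levels[-1]

def gaussian_tail(x):
    """Q(x) = (1/sqrt(2*pi)) * integral_x^inf exp(-u^2/2) du = 0.5*erfc(x/sqrt(2))."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def mu_sigma2(levels, thresholds):
    """mu and sigma^2 = tau^2 - mu^2 of Eq. (20); thresholds = [t_0=0, ..., t_{L-1}]."""
    t = list(thresholds) + [float("inf")]
    L = len(levels)
    mu = math.sqrt(2.0 / math.pi) * sum(
        levels[i] * (math.exp(-t[i] ** 2 / 2.0) - math.exp(-t[i + 1] ** 2 / 2.0))
        for i in range(L))
    tau2 = 2.0 * sum(levels[i] ** 2 * (gaussian_tail(t[i]) - gaussian_tail(t[i + 1]))
                     for i in range(L))
    return mu, tau2 - mu ** 2

if __name__ == "__main__":
    # arbitrary 2-bit demo quantizer: thresholds {0, 1}, levels {0.5, 1.5}
    t, l = [0.0, 1.0], [0.5, 1.5]
    print([quantize(x, l, t) for x in (-2.0, -0.3, 0.7, 4.1)])  # [-1.5, -0.5, 0.5, 1.5]
    mu, s2 = mu_sigma2([1.0], [0.0])   # 1-bit, level 1
    print(round(mu, 4), round(s2, 4))  # 0.7979 0.3634, i.e. sqrt(2/pi) and 1 - 2/pi
```

Sweeping `mu_sigma2` over candidate (t, ℓ) pairs is exactly the kind of search that the simplified design problem (19) reduces to.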
2.3.2 Consequences of the Precise Error Prediction

Theorem 2.1 shows that ‖μ⁻¹x̂ − x₀‖₂ = ‖x̂_lin − x₀‖₂, where x̂_lin is the solution to (17), only this time with a measurement vector y_lin = Ax₀ + (σ/μ)z, where μ, σ are as in (20) and z has entries i.i.d. standard normal. Thus, lower values of the ratio σ²/μ² correspond to lower values of the error, and the design problem posed in (18) is equivalent to the following simplified one:
$$(t_*, \ell_*) = \arg\min_{t,\ell}\;\frac{\sigma^2(t,\ell)}{\mu^2(t,\ell)}. \tag{19}$$
To be explicit, μ and σ² above can be easily expressed from (3) after setting g = Q_q, as follows:
$$\mu := \mu(\ell,t) = \sqrt{\frac{2}{\pi}}\sum_{i=1}^{L}\ell_i\left(e^{-t_{i-1}^2/2} - e^{-t_i^2/2}\right) \quad\text{and}\quad \sigma^2 := \sigma^2(\ell,t) = \tau^2 - \mu^2, \tag{20}$$
where
$$\tau^2 := \tau^2(\ell,t) = 2\sum_{i=1}^{L}\ell_i^2\left(Q(t_{i-1}) - Q(t_i)\right) \quad\text{and}\quad Q(x) = \frac{1}{\sqrt{2\pi}}\int_x^{\infty}e^{-u^2/2}\,du.$$

2.3.3 An Algorithm for Finding Optimal Quantization Levels and Thresholds

In contrast to the initial problem in (18), the optimization involved in (19) is explicit in terms of the variables ℓ and t, but is still hard to solve in general. Interestingly, we show in Appendix C that the popular Lloyd-Max (LM) algorithm is an effective algorithm for solving (19), since the values to which it converges are stationary points of the objective in (19). Note that this is not an immediately obvious result, since the classical objective of the LM algorithm is minimizing the quantity E[‖y − Ax₀‖₂²] rather than E[‖μ⁻¹x̂ − x₀‖₂²].

References

[1] Francis R. Bach. Structured sparsity-inducing norms through submodular functions. In Advances in Neural Information Processing Systems, pages 118–126, 2010.
[2] Mohsen Bayati and Andrea Montanari. The LASSO risk for Gaussian matrices. IEEE Transactions on Information Theory, 58(4):1997–2017, 2012.
[3] Alexandre Belloni, Victor Chernozhukov, and Lie Wang. Square-root lasso: pivotal recovery of sparse signals via conic programming. Biometrika, 98(4):791–806, 2011.
[4] David R. Brillinger. The identification of a particular nonlinear time series system. Biometrika, 64(3):509–515, 1977.
[5] David R. Brillinger. A generalized linear model with "Gaussian" regressor variables. A Festschrift for Erich L. Lehmann, page 97, 1982.
[6] Venkat Chandrasekaran, Benjamin Recht, Pablo A. Parrilo, and Alan S. Willsky. The convex geometry of linear inverse problems. Foundations of Computational Mathematics, 12(6):805–849, 2012.
[7] David L. Donoho and Iain M. Johnstone. Minimax risk over ℓp-balls for ℓp-error. Probability Theory and Related Fields, 99(2):277–303, 1994.
[8] David L. Donoho, Iain Johnstone, and Andrea Montanari. Accurate prediction of phase transitions in compressed sensing via a connection to minimax denoising. IEEE Transactions on Information Theory, 59(6):3396–3433, 2013.
[9] David L. Donoho, Arian Maleki, and Andrea Montanari. Message-passing algorithms for compressed sensing. Proceedings of the National Academy of Sciences, 106(45):18914–18919, 2009.
[10] David L. Donoho, Arian Maleki, and Andrea Montanari. The noise-sensitivity phase transition in compressed sensing. IEEE Transactions on Information Theory, 57(10):6920–6941, 2011.
[11] Alexandra L. Garnham and Luke A. Prendergast. A note on least squares sensitivity in single-index model estimation and the benefits of response transformations. Electronic Journal of Statistics, 7:1983–2004, 2013.
[12] Yehoram Gordon. On Milman's inequality and random subspaces which escape through a mesh in ℝⁿ. Springer, 1988.
[13] Marwa El Halabi and Volkan Cevher. A totally unimodular view of structured sparsity. arXiv preprint arXiv:1411.1990, 2014.
[14] Hidehiko Ichimura. Semiparametric least squares (SLS) and weighted SLS estimation of single-index models. Journal of Econometrics, 58(1):71–120, 1993.
[15] Ker-Chau Li and Naihua Duan. Regression analysis under link violation. The Annals of Statistics, pages 1009–1052, 1989.
[16] Samet Oymak, Christos Thrampoulidis, and Babak Hassibi. The squared-error of generalized LASSO: A precise analysis. arXiv preprint arXiv:1311.0830, 2013.
[17] Yaniv Plan and Roman Vershynin.
The generalized LASSO with non-linear observations. arXiv preprint arXiv:1502.04071, 2015.
[18] Mihailo Stojnic. A framework to characterize performance of LASSO algorithms. arXiv preprint arXiv:1303.7291, 2013.
[19] Christos Thrampoulidis, Samet Oymak, and Babak Hassibi. Regularized linear regression: A precise analysis of the estimation error. In Proceedings of The 28th Conference on Learning Theory, pages 1683–1709, 2015.
[20] Christos Thrampoulidis, Ashkan Panahi, Daniel Guo, and Babak Hassibi. Precise error analysis of the LASSO. In Acoustics, Speech and Signal Processing (ICASSP), 2015 IEEE International Conference on, pages 3467–3471, 2015.
[21] Christos Thrampoulidis, Ashkan Panahi, and Babak Hassibi. Asymptotically exact error analysis for the generalized ℓ₂²-LASSO. In Information Theory (ISIT), 2015 IEEE International Symposium on, pages 2021–2025. IEEE, 2015.
[22] Robert Tibshirani. Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society, Series B (Methodological), pages 267–288, 1996.
[23] Robert Tibshirani, Michael Saunders, Saharon Rosset, Ji Zhu, and Keith Knight. Sparsity and smoothness via the fused lasso. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 67(1):91–108, 2005.
[24] Xinyang Yi, Zhaoran Wang, Constantine Caramanis, and Han Liu. Optimal linear estimation under unknown nonlinear transform. arXiv preprint arXiv:1505.03257, 2015.
[25] Ming Yuan and Yi Lin. Model selection and estimation in regression with grouped variables. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 68(1):49–67, 2006.
Natural Neural Networks Guillaume Desjardins, Karen Simonyan, Razvan Pascanu, Koray Kavukcuoglu {gdesjardins,simonyan,razp,korayk}@google.com Google DeepMind, London Abstract We introduce Natural Neural Networks, a novel family of algorithms that speed up convergence by adapting their internal representation during training to improve conditioning of the Fisher matrix. In particular, we show a specific example that employs a simple and efficient reparametrization of the neural network weights by implicitly whitening the representation obtained at each layer, while preserving the feed-forward computation of the network. Such networks can be trained efficiently via the proposed Projected Natural Gradient Descent algorithm (PRONG), which amortizes the cost of these reparametrizations over many parameter updates and is closely related to the Mirror Descent online learning algorithm. We highlight the benefits of our method on both unsupervised and supervised learning tasks, and showcase its scalability by training on the large-scale ImageNet Challenge dataset. 1 Introduction Deep networks have proven extremely successful across a broad range of applications. While their deep and complex structure affords them a rich modeling capacity, it also creates complex dependencies between the parameters which can make learning difficult via first order stochastic gradient descent (SGD). As long as SGD remains the workhorse of deep learning, our ability to extract high-level representations from data may be hindered by difficult optimization, as evidenced by the boost in performance offered by batch normalization (BN) [7] on the Inception architecture [25]. Though its adoption remains limited, the natural gradient [1] appears ideally suited to these difficult optimization issues.
By following the direction of steepest descent on the probabilistic manifold, the natural gradient can make constant progress over the course of optimization, as measured by the Kullback-Leibler (KL) divergence between consecutive iterates. Utilizing the proper distance measure ensures that the natural gradient is invariant to the parametrization of the model. Unfortunately, its application has been limited due to its high computational cost. Natural gradient descent (NGD) typically requires an estimate of the Fisher Information Matrix (FIM), whose size is quadratic in the number of parameters, and, worse, it requires computing its inverse. Truncated Newton methods can avoid explicitly forming the FIM in memory [12, 15], but they require an expensive iterative procedure to compute the inverse. Such computations can be wasteful as they do not take into account the highly structured nature of deep models. Inspired by recent work on model reparametrizations [17, 13], our approach starts with a simple question: can we devise a neural network architecture whose Fisher is constrained to be identity? This is an important question, as SGD and NGD would be equivalent in the resulting model. The main contribution of this paper is in providing a simple, theoretically justified network reparametrization which approximates, via first-order gradient descent, a block-diagonal natural gradient update over layers. Our method is computationally efficient due to the local nature of the reparametrization, based on whitening, and the amortized nature of the algorithm. Our second contribution is in unifying many heuristics commonly used for training neural networks under the roof of the natural gradient, while highlighting an important connection between model reparametrizations and Mirror Descent [3].
Finally, we showcase the efficiency and the scalability of our method across a broad range of experiments, scaling our method from standard deep auto-encoders to large convolutional models on ImageNet [20], trained across multiple GPUs. This is, to our knowledge, the first time a (non-diagonal) natural gradient algorithm is scaled to problems of this magnitude.

2 The Natural Gradient

This section provides the necessary background and derives a particular form of the FIM whose structure will be key to our efficient approximation. While we tailor the development of our method to the classification setting, our approach generalizes to regression and density estimation.

2.1 Overview

We consider the problem of fitting the parameters θ ∈ ℝᴺ of a model p(y | x; θ) to an empirical distribution π(x, y) under the log-loss. We denote by x ∈ 𝒳 the observation vector and y ∈ 𝒴 its associated label. Concretely, this stochastic optimization problem aims to solve:
$$\theta^* \in \arg\min_\theta\, \mathbb{E}_{(x,y)\sim\pi}\left[-\log p(y\mid x, \theta)\right]. \tag{1}$$
Defining the per-example loss as ℓ(x, y), Stochastic Gradient Descent (SGD) performs the above minimization by iteratively following the direction of steepest descent, given by the column vector ∇ = E_π[dℓ/dθ]. Parameters are updated using the rule θ⁽ᵗ⁺¹⁾ ← θ⁽ᵗ⁾ − α⁽ᵗ⁾∇⁽ᵗ⁾, where α is a learning rate. An equivalent proximal form of gradient descent [4] reveals the precise nature of α:
$$\theta^{(t+1)} = \arg\min_\theta\left\{\langle\theta, \nabla\rangle + \frac{1}{2\alpha^{(t)}}\left\|\theta - \theta^{(t)}\right\|_2^2\right\} \tag{2}$$
Namely, each iterate θ⁽ᵗ⁺¹⁾ is the solution to an auxiliary optimization problem, where α controls the distance between consecutive iterates, using an L2 distance. In contrast, the natural gradient relies on the KL-divergence between iterates, a more appropriate distance measure for probability distributions. Its metric is determined by the Fisher Information Matrix,
$$F_\theta = \mathbb{E}_{x\sim\pi}\left\{\mathbb{E}_{y\sim p(y\mid x,\theta)}\left[\left(\frac{\partial \log p}{\partial\theta}\right)\left(\frac{\partial \log p}{\partial\theta}\right)^T\right]\right\}, \tag{3}$$
i.e. the covariance of the gradients of the model log-probabilities with respect to its parameters.
The natural gradient direction is then obtained as ∇_N = F_θ⁻¹∇. See [15, 14] for a recent overview of the topic.

2.2 Fisher Information Matrix for MLPs

We start by deriving the precise form of the Fisher for a canonical multi-layer perceptron (MLP) composed of L layers. We consider the following deep network for binary classification, though our approach generalizes to an arbitrary number of output classes:
$$p(y=1\mid x) \equiv h_L = f_L(W_L h_{L-1} + b_L), \quad \cdots, \quad h_1 = f_1(W_1 x + b_1). \tag{4}$$
The parameters of the MLP, denoted θ = {W₁, b₁, …, W_L, b_L}, are the weights Wᵢ ∈ ℝ^{Nᵢ×Nᵢ₋₁} connecting layers i and i−1, and the biases bᵢ ∈ ℝ^{Nᵢ}. fᵢ is an element-wise non-linear function. Let us define δᵢ to be the gradient backpropagated through the i-th non-linearity. We ignore the off block-diagonal components of the Fisher matrix and focus on the block F_{Wᵢ} corresponding to interactions between parameters of layer i. This block takes the form:
$$F_{W_i} = \mathbb{E}_{x\sim\pi,\, y\sim p}\left[\mathrm{vec}\!\left(\delta_i h_{i-1}^T\right)\mathrm{vec}\!\left(\delta_i h_{i-1}^T\right)^T\right],$$
where vec(X) is the vectorization function yielding a column vector from the rows of matrix X. Assuming that δᵢ and the activations hᵢ₋₁ are independent random variables, we can write:
$$F_{W_i}(km, ln) \approx \mathbb{E}_{x\sim\pi,\, y\sim p}\left[\delta_i(k)\delta_i(l)\right]\,\mathbb{E}_\pi\left[h_{i-1}(m)h_{i-1}(n)\right], \tag{5}$$
where X(i, j) is the element at row i and column j of matrix X, and x(i) is the i-th element of vector x. F_{Wᵢ}(km, ln) is the entry of the Fisher capturing interactions between parameters Wᵢ(k, m) and Wᵢ(l, n). Our hypothesis, verified experimentally in Sec. 4.1, is that we can greatly improve conditioning of the Fisher by enforcing that E_π[hᵢhᵢᵀ] = I for all layers of the network, despite ignoring possible correlations in the δ's and off block-diagonal terms of the Fisher.

Figure 1: (a) A 2-layer natural neural network. (b) Illustration of the projections involved in PRONG.
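The factorization implied by Eq. (5) can be checked numerically: when δᵢ and hᵢ₋₁ are independent, the layer's Fisher block is the Kronecker product of their second-moment matrices. The sketch below (illustrative only, not from the paper) uses small discrete distributions so the expectations are exact enumerations:

```python
import numpy as np

# Under the independence assumption of Eq. (5), the Fisher block factorizes as
#   F_{W_i} = E[delta delta^T] (x) E[h h^T]   (Kronecker product),
# since vec(delta h^T) = delta (x) h for row-major vectorization.
deltas = [np.array([1.0, -1.0]), np.array([0.5, 2.0])]      # values of delta_i
hs = [np.array([1.0, 0.0, 2.0]), np.array([-1.0, 1.0, 0.5])]  # values of h_{i-1}

# exact expectation under the uniform product (i.e. independent) distribution
F = np.zeros((6, 6))
for delta in deltas:
    for h in hs:
        g = np.outer(delta, h).reshape(-1)   # vec(delta h^T), row-major
        F += np.outer(g, g) / (len(deltas) * len(hs))

E_dd = sum(np.outer(d, d) for d in deltas) / len(deltas)
E_hh = sum(np.outer(h, h) for h in hs) / len(hs)
print(np.allclose(F, np.kron(E_dd, E_hh)))   # True
```

This is why whitening the activations (making E[hhᵀ] = I) directly improves the conditioning of the block: one Kronecker factor becomes the identity.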
3 Projected Natural Gradient Descent

This section introduces Whitened Neural Networks (WNN), which perform approximate whitening of their internal representations. We begin by presenting a novel whitened neural layer, under the assumption that the network statistics μᵢ(θ) = E[hᵢ] and Σᵢ(θ) = E[hᵢhᵢᵀ] are fixed. We then show how these layers can be adapted to efficiently track population statistics over the course of training. The resulting learning algorithm is referred to as Projected Natural Gradient Descent (PRONG). We highlight an interesting connection between PRONG and Mirror Descent in Section 3.3.

3.1 A Whitened Neural Layer

The building block of WNN is the following neural layer:
$$h_i = f_i\left(V_i U_{i-1}(h_{i-1} - c_{i-1}) + d_i\right). \tag{6}$$
Compared to Eq. 4, we have introduced an explicit centering parameter cᵢ₋₁ ∈ ℝ^{Nᵢ₋₁}, equal to μᵢ₋₁, which ensures that the input to the dot product has zero mean in expectation. This is analogous to the centering reparametrization for Deep Boltzmann Machines [13]. The weight matrix Uᵢ₋₁ ∈ ℝ^{Nᵢ₋₁×Nᵢ₋₁} is a per-layer PCA-whitening matrix whose rows are obtained from an eigendecomposition of Σᵢ₋₁:
$$\Sigma_i = \tilde{U}_i\,\mathrm{diag}(\lambda_i)\,\tilde{U}_i^T \;\Longrightarrow\; U_i = \mathrm{diag}(\lambda_i + \epsilon)^{-\frac{1}{2}}\,\tilde{U}_i^T. \tag{7}$$
The hyper-parameter ε is a regularization term controlling the maximal multiplier on the learning rate, or equivalently the size of the trust region. The parameters Vᵢ ∈ ℝ^{Nᵢ×Nᵢ₋₁} and dᵢ ∈ ℝ^{Nᵢ} are analogous to the canonical parameters of a neural network as introduced in Eq. 4, though they operate in the space of whitened unit activations Uᵢ(hᵢ − cᵢ). This layer can be stacked to form a deep neural network having L layers, with model parameters Ω = {V₁, d₁, …, V_L, d_L} and whitening coefficients Φ = {U₀, c₀, …, U_{L−1}, c_{L−1}}, as depicted in Fig. 1a. Though the above layer might appear over-parametrized at first glance, we crucially do not learn the whitening coefficients via loss minimization, but instead estimate them directly from the model statistics.
These coefficients are thus constants from the point of view of the optimizer and simply serve to improve conditioning of the Fisher with respect to the parameters Ω, denoted F_Ω. Indeed, using the same derivation that led to Eq. 5, we can see that the block-diagonal terms of F_Ω now involve terms E[(Uᵢhᵢ)(Uᵢhᵢ)ᵀ], which equal identity by construction.

3.2 Updating the Whitening Coefficients

As the whitened model parameters Ω evolve during training, so do the statistics μᵢ and Σᵢ. For our model to remain well conditioned, the whitening coefficients must be updated at regular intervals, while taking care not to interfere with the convergence properties of gradient descent. This can be achieved by coupling updates to Φ with corresponding updates to Ω such that the overall function implemented by the MLP remains unchanged, e.g. by preserving the product VᵢUᵢ₋₁ before and after each update to the whitening coefficients (with an analogous constraint on the biases). Unfortunately, while estimating the mean μᵢ and diag(Σᵢ) could be performed online over a minibatch of samples as in the recent Batch Normalization scheme [7], estimating the full covariance matrix will undoubtedly require a larger number of samples.

Algorithm 1 Projected Natural Gradient Descent
1: Input: training set D, initial parameters θ.
2: Hyper-parameters: reparametrization frequency T, number of samples Nₛ, regularization term ε.
3: Uᵢ ← I; cᵢ ← 0; t ← 0
4: repeat
5:   if mod(t, T) = 0 then                                ▷ amortize cost of lines 6–11
6:     for all layers i do
7:       Compute canonical parameters Wᵢ = VᵢUᵢ₋₁; bᵢ = dᵢ − Wᵢcᵢ₋₁.   ▷ projection P_Φ⁻¹(Ω)
8:       Estimate μᵢ and Σᵢ, using Nₛ samples from D.
9:       Update cᵢ from μᵢ and Uᵢ from the eigendecomposition of Σᵢ + εI.   ▷ update Φ
10:      Update parameters Vᵢ ← WᵢUᵢ₋₁⁻¹; dᵢ ← bᵢ + VᵢUᵢ₋₁cᵢ₋₁.   ▷ projection P_Φ(θ)
11:    end for
12:  end if
13:  Perform an SGD update wrt. Ω using samples from D.
14:  t ← t + 1
15: until convergence
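A minimal sketch (not the authors' implementation) of the whitening step of Eq. (7) and the projections on lines 7 and 10 of Algorithm 1. The covariance matrix, dimensions and random parameters are arbitrary demo values; the check confirms that a re-whitening leaves the function computed by the layer unchanged:

```python
import numpy as np

def pca_whitening(sigma, eps=1e-5):
    """PCA-whitening matrix of Eq. (7): U = diag(lam + eps)^(-1/2) @ Utilde.T,
    with sigma = Utilde @ diag(lam) @ Utilde.T its eigendecomposition."""
    lam, u_tilde = np.linalg.eigh(sigma)
    return np.diag(1.0 / np.sqrt(lam + eps)) @ u_tilde.T

def to_canonical(V, d, U, c):
    """Line 7 of Algorithm 1: W = V U, b = d - W c."""
    W = V @ U
    return W, d - W @ c

def to_whitened(W, b, U_new, c_new):
    """Line 10 of Algorithm 1: V = W U^{-1}, d = b + V U c."""
    V = W @ np.linalg.inv(U_new)
    return V, b + V @ (U_new @ c_new)

def pre_activation(h, V, d, U, c):
    """Whitened layer of Eq. (6), before the non-linearity f_i."""
    return V @ (U @ (h - c)) + d

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    # whitening: U sigma U^T = I for a fixed activation covariance
    sigma = np.array([[2.0, 0.8], [0.8, 1.0]])
    U0 = pca_whitening(sigma, eps=0.0)
    print(np.allclose(U0 @ sigma @ U0.T, np.eye(2)))  # True

    # re-whitening with fresh coefficients preserves the layer function
    V, d = rng.standard_normal((2, 2)), rng.standard_normal(2)
    c, h = rng.standard_normal(2), rng.standard_normal(2)
    out_before = pre_activation(h, V, d, U0, c)
    W, b = to_canonical(V, d, U0, c)
    U1, c1 = pca_whitening(sigma + 0.1 * np.eye(2)), rng.standard_normal(2)
    V1, d1 = to_whitened(W, b, U1, c1)
    print(np.allclose(pre_activation(h, V1, d1, U1, c1), out_before))  # True
```

The second check works because both parametrizations compute W h + b: the whitening coefficients cancel exactly, which is what lets PRONG swap them without perturbing the loss.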
While statistics could be accumulated online via an exponential moving average as in RMSprop [27] or K-FAC [8], the cost of the eigendecomposition required for computing the whitening matrix Uᵢ remains cubic in the layer size. In the simplest instantiation of our method, we exploit the smoothness of gradient descent by simply amortizing the cost of these operations over T consecutive updates. SGD updates in the whitened model will be closely aligned with NGD immediately following the reparametrization. The quality of this approximation will degrade over time, until the subsequent reparametrization. The resulting algorithm is shown in the pseudo-code of Algorithm 1. We can improve upon this basic amortization scheme by updating the whitened parameters Ω using a per-batch diagonal natural gradient update, whose statistics are computed online. In our framework, this can be implemented via the reparametrization Wᵢ = VᵢDᵢ₋₁Uᵢ₋₁, where Dᵢ₋₁ is a diagonal matrix updated such that V[Dᵢ₋₁Uᵢ₋₁hᵢ₋₁] = 1 for each minibatch. Updates to Dᵢ₋₁ can be compensated for exactly and cheaply by scaling the rows of Uᵢ₋₁ and the columns of Vᵢ accordingly. A simpler implementation of this idea is to combine PRONG with batch normalization, which we denote PRONG+.

3.3 Duality and Mirror Descent

There is an inherent duality between the parameters Ω of our whitened neural layer and the parameters θ of a canonical model. Indeed, there exist linear projections P_Φ(θ) and P_Φ⁻¹(Ω) which map from canonical parameters θ to whitened parameters Ω, and vice-versa. P_Φ(θ) corresponds to line 10 of Algorithm 1, while P_Φ⁻¹(Ω) corresponds to line 7. This duality between θ and Ω reveals a close connection between PRONG and Mirror Descent [3]. Mirror Descent (MD) is an online learning algorithm which generalizes the proximal form of gradient descent to the class of Bregman divergences B_ψ(q, p), where q, p ∈ Γ and ψ : Γ → ℝ is a strictly convex and differentiable function.
Replacing the L2 distance by B_ψ, mirror descent solves the proximal problem of Eq. 2 by applying first-order updates in a dual space and then projecting back onto the primal space. Defining Ω = ∇_θ ψ(θ) and θ = ∇_Ω ψ*(Ω), with ψ* the convex conjugate of ψ, the mirror descent updates are given by:
$$\Omega^{(t+1)} = \nabla_\theta\,\psi\!\left(\theta^{(t)}\right) - \alpha^{(t)}\nabla_\theta \tag{8}$$
$$\theta^{(t+1)} = \nabla_\Omega\,\psi^*\!\left(\Omega^{(t+1)}\right) \tag{9}$$

Figure 2: Fisher matrix for a small MLP (a) before and (b) after the first reparametrization. Best viewed in colour. (c) Condition number of the FIM during training, relative to the initial conditioning. All models were initialized such that the initial conditioning was the same, and learning rates were adjusted such that they reach roughly the same training error in the given time.

It is well known [26, 18] that the natural gradient is a special case of MD, where the distance-generating function¹ is chosen to be ψ(θ) = ½θᵀFθ. The mirror updates are somewhat unintuitive, however: why is the gradient ∇_θ applied to the dual space if it has been computed in the space of parameters θ? This is where PRONG relates to MD. It is trivial to show that using the function ψ̃(θ) = ½θᵀ√F θ, instead of the previously defined ψ(θ), enables us to directly update the dual parameters using ∇_Ω, the gradient computed directly in the dual space. Indeed, the resulting updates can be shown to implement the natural gradient and are thus equivalent to the updates of Eq. 9 with the appropriate choice of ψ(θ):
$$\tilde{\Omega}^{(t+1)} = \nabla_\theta\,\tilde{\psi}\!\left(\theta^{(t)}\right) - \alpha^{(t)}\nabla_\Omega = F^{\frac{1}{2}}\theta^{(t)} - \alpha^{(t)}\,\mathbb{E}_\pi\!\left[F^{-\frac{1}{2}}\frac{d\ell}{d\theta}\right]$$
$$\tilde{\theta}^{(t+1)} = \nabla_\Omega\,\tilde{\psi}^*\!\left(\tilde{\Omega}^{(t+1)}\right) = \theta^{(t)} - \alpha^{(t)}F^{-1}\,\mathbb{E}_\pi\!\left[\frac{d\ell}{d\theta}\right] \tag{10}$$
The operators ∇ψ̃ and ∇ψ̃* correspond to the projections P_Φ(θ) and P_Φ⁻¹(Ω) used by PRONG to map from the canonical neural parameters θ to those of the whitened layers Ω. As illustrated in Fig. 1b, the advantage of this whitened form of MD is that one may amortize the cost of the projections over several updates, as gradients can be computed directly in the dual parameter space.
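The equivalence between this mirror descent scheme and the natural gradient step can be checked on a toy quadratic; in the sketch below (illustrative, not from the paper), the Fisher matrix is an arbitrary SPD stand-in and the loss gradient is a fixed vector:

```python
import numpy as np

# Mirror descent with psi(theta) = 0.5 * theta^T F theta: the dual step
#   Omega' = grad psi(theta) - alpha * g
# followed by the primal projection
#   theta' = grad psi*(Omega') = F^{-1} Omega'
# recovers the natural gradient update theta - alpha * F^{-1} g.
F = np.array([[4.0, 1.0],
              [1.0, 3.0]])                  # toy Fisher (symmetric positive definite)
theta = np.array([1.0, -2.0])
g = np.array([0.5, 1.0])                    # loss gradient at theta
alpha = 0.1

omega_next = F @ theta - alpha * g          # mirror (dual) update, cf. Eq. (8)
theta_md = np.linalg.solve(F, omega_next)   # back to primal, cf. Eq. (9)
theta_ngd = theta - alpha * np.linalg.solve(F, g)   # natural gradient step
print(np.allclose(theta_md, theta_ngd))     # True
```

PRONG's whitened variant differs only in that the dual gradient itself is used, which is what permits amortizing the two projections over many SGD steps.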
3.4 Related Work

This work extends the recent contributions of [17] in formalizing many commonly used heuristics for training MLPs: the importance of zero-mean activations and gradients [10, 21], as well as the importance of normalized variances in the forward and backward passes [10, 21, 6]. More recently, Vatanen et al. [28] extended their previous work [17] by introducing a multiplicative constant γ_i to the centered non-linearity. In contrast, we introduce a full whitening matrix U_i and focus on whitening the feedforward network activations, instead of normalizing a geometric mean over unit and gradient variances. The recently introduced batch normalization (BN) scheme [7] quite closely resembles a diagonal version of PRONG, the main difference being that BN normalizes the variance of activations before the non-linearity, as opposed to normalizing the latent activations by looking at the full covariance. Furthermore, BN implements normalization by modifying the feed-forward computations, thus requiring the method to backpropagate through the normalization operator. A diagonal version of PRONG also bears an interesting resemblance to RMSprop [27, 5], in that both normalization terms involve the square root of the FIM. An important distinction, however, is that PRONG applies this update in the whitened parameter space, thus preserving the natural gradient interpretation.

¹As the Fisher matrix, and thus ψ, depend on the parameters θ^{(t)}, these should be indexed with a time superscript, which we drop for clarity.

Figure 3: Optimizing a deep auto-encoder on MNIST. (a) Impact of the eigenvalue regularization term ε. (b) Impact of the amortization period T, showing that initialization with the whitening reparametrization is important for achieving faster learning and a better error rate. (c) Training error vs. number of updates. (d) Training error vs. cpu-time. Plots (c-d) show that PRONG achieves a better error rate both in number of updates and wall clock.
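The contrast with diagonal normalization drawn above can be made concrete. In this small sketch (our own example data, not an experiment from the paper), BN-style per-unit scaling leaves strong cross-unit correlation in place, while the full whitening matrix removes it:

```python
import numpy as np

rng = np.random.default_rng(2)
z = rng.normal(size=(2000, 2))
h = z @ np.array([[1.0, 0.9], [0.0, 0.5]])   # two strongly correlated units
h -= h.mean(axis=0)

# Diagonal (batch-norm-style) scaling: unit variances, correlation untouched.
h_diag = h / h.std(axis=0, ddof=1)
corr_diag = np.cov(h_diag, rowvar=False)[0, 1]

# Full whitening: also removes the correlation.
sigma = np.cov(h, rowvar=False)
lam, Q = np.linalg.eigh(sigma)
U = Q @ np.diag(lam ** -0.5) @ Q.T
cov_white = np.cov(h @ U.T, rowvar=False)
```

Here `corr_diag` stays close to the original correlation (about 0.87 in this construction), while `cov_white` is the identity up to floating-point error.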
K-FAC [8] is closely related to PRONG and was developed concurrently with our method. It targets the same layer-wise block-diagonal of the Fisher, approximating each block as in Eq. 5. Unlike our method, however, K-FAC does not approximate the covariance of backpropagated gradients as the identity, and it further estimates the required statistics using exponential moving averages (unlike our approach based on amortization). Similar techniques can be found in the preconditioning of the Kaldi speech recognition toolkit [16]. By modeling the Fisher matrix as the covariance of a sparsely connected Gaussian graphical model, FANG [19] represents a general formalism for exploiting model structure to efficiently compute the natural gradient. One application to neural networks [8] is in decorrelating gradients across neighbouring layers. A similar algorithm to PRONG was later found in [23], where it appeared simply as a thought experiment, with no amortization or recourse for efficiently computing F.

4 Experiments

We begin with a set of diagnostic experiments which highlight the effectiveness of our method at improving conditioning. We also illustrate the impact of the hyper-parameters T and ε, which control the frequency of the reparametrization and the size of the trust region. Section 4.2 evaluates PRONG on unsupervised learning problems, where models are both deep and fully connected. Section 4.3 then moves on to large convolutional models for image classification. Experimental details such as model architectures and hyper-parameter configurations can be found in the supplemental material.

4.1 Introspective Experiments

Conditioning. To provide a better understanding of the approximation made by PRONG, we train a small 3-layer MLP with tanh non-linearities on a downsampled (10x10) version of MNIST [11]. The model size was chosen so that the full Fisher matrix is tractable. Fig.
2(a-b) shows the FIM of the middle hidden layers before and after whitening the model activations (we took the absolute value of the entries to improve visibility). Fig. 2c depicts the evolution of the condition number of the FIM during training, measured as a percentage of its initial value (before the first whitening reparametrization in the case of PRONG). We present such curves for SGD, RMSprop, batch normalization and PRONG. The results clearly show that the reparametrization performed by PRONG improves conditioning (a reduction of more than 95%). These observations confirm our initial assumption, namely that we can improve conditioning of the block-diagonal Fisher by whitening activations alone.

Figure 4: Classification error on CIFAR-10 (a-b) and ImageNet (c-d). On CIFAR-10, PRONG achieves better test error and converges faster. On ImageNet, PRONG+ achieves comparable validation error while maintaining a faster convergence rate.

Sensitivity to Hyper-Parameters. Figures 3a-3b highlight the effect of the eigenvalue regularization term ε and the reparametrization interval T. The experiments were performed on the best-performing auto-encoder of Section 4.2 on the MNIST dataset. Figures 3a-3b plot the reconstruction error on the training set for various values of ε and T. As ε determines a maximum multiplier on the learning rate, learning becomes extremely sensitive when this learning rate is high². For smaller step sizes however, lowering ε can yield significant speedups, often converging faster than simply using a larger learning rate. This confirms the importance of the manifold curvature for optimization (lower ε allows different directions to be scaled drastically differently according to their corresponding curvature). Fig. 3b compares the impact of T for models having a proper whitened initialization (solid lines) to models initialized with a standard “fan-in” initialization (dashed lines) [10].
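The trust-region role of ε can be seen from the eigendecomposition. Under the approximation of the layer's Fisher block by the activation covariance Σ (our reading of the setup, not a formula stated in the text), each eigendirection's gradient is rescaled by (λ + ε)⁻¹, so no direction's effective learning-rate multiplier can exceed 1/ε. A small sketch of this bound:

```python
import numpy as np

def direction_multipliers(sigma, eps):
    """Per-eigendirection learning-rate multipliers induced by the
    preconditioner (Sigma + eps*I)^(-1), under the approximation F ~= Sigma."""
    lam = np.linalg.eigvalsh(sigma)
    return 1.0 / (lam + eps)

rng = np.random.default_rng(3)
B = rng.normal(size=(5, 5))
sigma = B @ B.T                  # a sample activation covariance

for eps in (1e-1, 1e-2, 1e-3):
    m = direction_multipliers(sigma, eps)
    assert m.max() <= 1.0 / eps  # eps caps the largest multiplier (trust region)
    assert m.max() > m.min()     # low-curvature directions get larger steps
```

Lowering ε lets flat directions take much larger steps, which is the speedup (and the sensitivity) observed in Fig. 3a.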
These results are quite surprising in showing the effectiveness of the whitening reparametrization as a simple initialization scheme. That being said, performance can degrade due to ill conditioning when T becomes excessively large (T = 10^5).

4.2 Unsupervised Learning

Following Martens [12], we compare PRONG on the task of minimizing the reconstruction error of a dense 8-layer auto-encoder on the MNIST dataset. Reconstruction error with respect to updates and wall-clock time is shown in Fig. 3 (c,d). We can see that PRONG significantly outperforms the baseline methods, by up to an order of magnitude in number of updates. With respect to wall-clock time, our method significantly outperforms the baselines in terms of time taken to reach a given error threshold, despite the fact that the runtime per epoch for PRONG was 3.2x that of SGD, compared to batch normalization (2.3x SGD) and RMSprop (9x SGD). Note that these timing numbers reflect performance under the optimal choice of hyper-parameters, which in the case of batch normalization yielded a batch size of 256, compared to 128 for all other methods. Further breaking down the performance, 34% of the runtime of PRONG was spent performing the whitening reparametrization, compared to 4% for estimating the per-layer means and covariances. This confirms that amortization is paramount to the success of our method.³

4.3 Supervised Learning

We now evaluate our method for training deep supervised convolutional networks for object recognition. Following [7], we perform whitening across feature maps only: that is, we treat the pixels in a given feature map as independent samples. This allows us to implement the whitened neural layer as a sequence of two convolutions, where the first is by a 1x1 whitening filter. PRONG is compared to SGD, RMSprop and batch normalization, with each algorithm accelerated via momentum. Results are presented on CIFAR-10 [9] and the ImageNet Challenge (ILSVRC12) datasets [20].
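Whitening across feature maps can be sketched as follows (plain numpy rather than a convolution library; the function name is ours). Every spatial position of every image is treated as an independent sample of a C-dimensional variable, and the resulting C × C whitening matrix is exactly the kernel of a 1x1 convolution:

```python
import numpy as np

def whiten_feature_maps(x, eps=1e-5):
    """x: activations of shape (N, C, H, W). Every pixel of every map is treated
    as an i.i.d. sample of a C-dimensional variable; the C x C matrix U acts as
    the kernel of a 1x1 convolution."""
    N, C, H, W = x.shape
    samples = x.transpose(0, 2, 3, 1).reshape(-1, C)      # (N*H*W, C)
    mu = samples.mean(axis=0)
    lam, Q = np.linalg.eigh(np.cov(samples, rowvar=False) + eps * np.eye(C))
    U = Q @ np.diag(lam ** -0.5) @ Q.T
    out = np.einsum('dc,nchw->ndhw', U, x - mu[None, :, None, None])
    return out, U, mu

rng = np.random.default_rng(4)
x = rng.normal(size=(8, 3, 5, 5))
x[:, 1] += 0.8 * x[:, 0]                                  # correlate two channels
out, U, mu = whiten_feature_maps(x)
flat = out.transpose(0, 2, 3, 1).reshape(-1, 3)
```

Estimating a C × C covariance instead of one per spatial location keeps the cost independent of the map resolution, which is what makes the approach practical for convolutional layers.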
In both cases, learning rates were decreased using a “waterfall” annealing schedule, which divided the learning rate by 10 when the validation error failed to improve after a set number of evaluations.

²Unstable combinations of learning rates and ε are omitted for clarity.
³We note that our whitening implementation is not optimized, as it does not take advantage of GPU acceleration. Runtime is therefore expected to improve as we move the eigendecompositions to the GPU.

CIFAR-10. We now evaluate PRONG on CIFAR-10, using a deep convolutional model inspired by the VGG architecture [22]. The model was trained on 24×24 random crops with random horizontal reflections. Model selection was performed on a held-out validation set of 5k examples. Results are shown in Fig. 4. With respect to training error, PRONG and BN seem to offer similar speedups compared to SGD with momentum. Our hypothesis is that the benefits of PRONG are more pronounced for densely connected networks, where the number of units per layer is typically larger than the number of maps used in convolutional networks. Interestingly, PRONG generalized better, achieving 7.32% test error vs. 8.22% for batch normalization. This reflects the findings of [15], which showed how NGD can leverage unlabeled data for better generalization: the “unlabeled” data here comes from the extra crops and reflections observed when estimating the whitening matrices.

ImageNet Challenge Dataset. Our final set of experiments aims to show the scalability of our method. We applied our natural gradient algorithm to the large-scale ILSVRC12 dataset (1.3M images labelled into 1000 categories) using the Inception architecture [7]. In order to scale to problems of this size, we parallelized our training loop so as to split the processing of a single minibatch (of size 256) across multiple GPUs. Note that PRONG can scale well in this setting, as the estimation of the mean and covariance parameters of each layer is also embarrassingly parallel.
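The waterfall annealing schedule described above can be sketched in a few lines (the class name and the `patience` parameter are our choices; the text only specifies "a set number of evaluations"):

```python
class WaterfallSchedule:
    """Divide the learning rate by `factor` whenever validation error fails
    to improve for `patience` consecutive evaluations."""
    def __init__(self, lr, factor=10.0, patience=2):
        self.lr, self.factor, self.patience = lr, factor, patience
        self.best, self.bad_evals = float('inf'), 0

    def step(self, val_error):
        if val_error < self.best:
            self.best, self.bad_evals = val_error, 0
        else:
            self.bad_evals += 1
            if self.bad_evals >= self.patience:
                self.lr /= self.factor
                self.bad_evals = 0
        return self.lr

sched = WaterfallSchedule(lr=0.1, patience=2)
errors = [0.50, 0.40, 0.41, 0.42, 0.39]   # two consecutive stalls trigger one drop
rates = [sched.step(e) for e in errors]
```

Unlike a fixed decay, the drop points adapt to each optimizer's own progress, which keeps the comparison between methods fair.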
Eight GPUs were used for computing gradients and estimating model statistics, though the eigendecomposition required for whitening was itself not parallelized in the current implementation. Given the difficulty of the task, we employed the enhanced version of the algorithm (PRONG+), as simple periodic whitening of the model proved to be unstable. Figure 4 (c-d) shows that batch normalization and PRONG+ converge to approximately the same top-1 validation error (28.6% vs. 28.9%, respectively) for similar cpu-time. In comparison, SGD achieved a validation error of 32.1%. PRONG+, however, exhibits much faster convergence initially: after 10^5 updates it obtains around 36% error, compared to 46% for BN alone. We stress that the ImageNet results are somewhat preliminary. While our top-1 error is higher than reported in [7] (25.2%), we used a much less extensive data augmentation pipeline. We are only beginning to explore what natural gradient methods may achieve on these large-scale optimization problems and are encouraged by these initial findings.

5 Discussion

We began this paper by asking whether convergence speed could be improved by simple model reparametrizations, driven by the structure of the Fisher matrix. From a theoretical and experimental perspective, we have shown that Whitened Neural Networks can achieve this via a simple, scalable and efficient whitening reparametrization. They are, however, one of several possible instantiations of the concept of Natural Neural Networks. In a previous incarnation of the idea, we exploited a similar reparametrization to include whitening of backpropagated gradients⁴. We favor the simpler approach presented in this paper, as we generally found the alternative less stable for deep networks. This may be due to the difficulty of estimating gradient covariances in lower layers, a problem which seems to mirror the famous vanishing gradient problem [17].
Maintaining whitened activations may also offer additional benefits from the point of view of model compression and generalization. By virtue of whitening, the projection U_i h_i forms an ordered representation, having least and most significant bits. The sharp roll-off in the eigenspectrum of Σ_i may explain why deep networks are amenable to compression [2]. Similarly, one could envision spectral versions of Dropout [24] where the dropout probability is a function of the eigenvalues. Alternative ways of orthogonalizing the representation at each layer should also be explored, via alternate decompositions of Σ_i, or perhaps by exploiting the connection between linear auto-encoders and PCA. We also plan on pursuing the connection with Mirror Descent and further bridging the gap between deep learning and methods from online convex optimization.

Acknowledgments

We are extremely grateful to Shakir Mohamed for invaluable discussions and feedback in the preparation of this manuscript. We also thank Philip Thomas, Volodymyr Mnih, Raia Hadsell, Sergey Ioffe and Shane Legg for feedback on the paper.

⁴The weight matrix can be parametrized as W_i = R_i^T V_i U_{i−1}, with R_i the whitening matrix for δ_i.

References

[1] Shun-ichi Amari. Natural gradient works efficiently in learning. Neural Computation, 1998.
[2] Jimmy Ba and Rich Caruana. Do deep nets really need to be deep? In NIPS, 2014.
[3] Amir Beck and Marc Teboulle. Mirror descent and nonlinear projected subgradient methods for convex optimization. Oper. Res. Lett., 2003.
[4] P. L. Combettes and J.-C. Pesquet. Proximal splitting methods in signal processing. ArXiv e-prints, December 2009.
[5] John Duchi, Elad Hazan, and Yoram Singer. Adaptive subgradient methods for online learning and stochastic optimization. JMLR, 2011.
[6] Xavier Glorot and Yoshua Bengio. Understanding the difficulty of training deep feedforward neural networks. In AISTATS, May 2010.
[7] Sergey Ioffe and Christian Szegedy.
Batch normalization: Accelerating deep network training by reducing internal covariate shift. In ICML, 2015.
[8] James Martens and Roger Grosse. Optimizing neural networks with Kronecker-factored approximate curvature. In ICML, June 2015.
[9] Alex Krizhevsky. Learning multiple layers of features from tiny images. Master's thesis, University of Toronto, 2009.
[10] Yann LeCun, Léon Bottou, Genevieve B. Orr, and Klaus-Robert Müller. Efficient backprop. In Neural Networks: Tricks of the Trade, Lecture Notes in Computer Science LNCS 1524. Springer Verlag, 1998.
[11] Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, pages 2278–2324, 1998.
[12] James Martens. Deep learning via Hessian-free optimization. In ICML, June 2010.
[13] K.-R. Müller and G. Montavon. Deep Boltzmann machines and the centering trick. In K.-R. Müller, G. Montavon, and G. B. Orr, editors, Neural Networks: Tricks of the Trade. Springer, 2013.
[14] Yann Ollivier. Riemannian metrics for neural networks. arXiv, abs/1303.0818, 2013.
[15] Razvan Pascanu and Yoshua Bengio. Revisiting natural gradient for deep networks. In ICLR, 2014.
[16] Daniel Povey, Xiaohui Zhang, and Sanjeev Khudanpur. Parallel training of deep neural networks with natural gradient and parameter averaging. ICLR workshop, 2015.
[17] T. Raiko, H. Valpola, and Y. LeCun. Deep learning made easier by linear transformations in perceptrons. In AISTATS, 2012.
[18] G. Raskutti and S. Mukherjee. The information geometry of mirror descent. arXiv, October 2013.
[19] Roger B. Grosse and Ruslan Salakhutdinov. Scaling up natural gradient by sparsely factorizing the inverse Fisher matrix. In ICML, June 2015.
[20] Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, Alexander C. Berg, and Li Fei-Fei. ImageNet Large Scale Visual Recognition Challenge.
International Journal of Computer Vision (IJCV), 2015.
[21] Nicol N. Schraudolph. Accelerated gradient descent by factor-centering decomposition. Technical Report IDSIA-33-98, Istituto Dalle Molle di Studi sull'Intelligenza Artificiale, 1998.
[22] K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. In International Conference on Learning Representations, 2015.
[23] Jascha Sohl-Dickstein. The natural gradient by analogy to signal whitening, and recipes and tricks for its use. arXiv, 2012.
[24] Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout: A simple way to prevent neural networks from overfitting. Journal of Machine Learning Research, 2014.
[25] Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich. Going deeper with convolutions. arXiv, 2014.
[26] Philip S. Thomas, William C. Dabney, Stephen Giguere, and Sridhar Mahadevan. Projected natural actor-critic. In Advances in Neural Information Processing Systems 26, 2013.
[27] Tijmen Tieleman and Geoffrey Hinton. RMSprop: Divide the gradient by a running average of its recent magnitude. Coursera: Neural Networks for Machine Learning, 2012.
[28] Tommi Vatanen, Tapani Raiko, Harri Valpola, and Yann LeCun. Pushing stochastic gradient towards second-order methods – backpropagation learning with transformations in nonlinearities. ICONIP, 2013.
Scalable Adaptation of State Complexity for Nonparametric Hidden Markov Models

Michael C. Hughes, William Stephenson, and Erik B. Sudderth
Department of Computer Science, Brown University, Providence, RI 02912
mhughes@cs.brown.edu, wtstephe@gmail.com, sudderth@cs.brown.edu

Abstract

Bayesian nonparametric hidden Markov models are typically learned via fixed truncations of the infinite state space or local Monte Carlo proposals that make small changes to the state space. We develop an inference algorithm for the sticky hierarchical Dirichlet process hidden Markov model that scales to big datasets by processing a few sequences at a time yet allows rapid adaptation of the state space cardinality. Unlike previous point-estimate methods, our novel variational bound penalizes redundant or irrelevant states and thus enables optimization of the state space. Our birth proposals use observed data statistics to create useful new states that escape local optima. Merge and delete proposals remove ineffective states to yield simpler models with more affordable future computations. Experiments on speaker diarization, motion capture, and epigenetic chromatin datasets discover models that are more compact, more interpretable, and better aligned to ground truth segmentations than competitors. We have released an open-source Python implementation which can parallelize local inference steps across sequences.

1 Introduction

The hidden Markov model (HMM) [1, 2] is widely used to segment sequential data into interpretable discrete states. Human activity streams might use walking or dancing states, while DNA transcription might be understood via promoter or repressor states [3]. The hierarchical Dirichlet process HMM (HDP-HMM) [4, 5, 6] provides an elegant Bayesian nonparametric framework for reasoning about possible data segmentations with different numbers of states.
Existing inference algorithms for HMMs and HDP-HMMs have numerous shortcomings: they cannot efficiently learn from large datasets, do not effectively explore segmentations with varying numbers of states, and are often trapped at local optima near their initialization. Stochastic optimization methods [7, 8] are particularly vulnerable to these last two issues, since they cannot change the number of states instantiated during execution. The importance of removing irrelevant states has long been recognized [9]. Samplers that add or remove states via split and merge moves have been developed for HDP topic models [10, 11] and beta process HMMs [12]. However, these Monte Carlo proposals use the entire dataset and require all sequences to fit in memory, limiting scalability. We propose an HDP-HMM learning algorithm that reliably transforms an uninformative, single-state initialization into an accurate yet compact set of states. Generalizing previous work on memoized variational inference for DP mixture models [13] and HDP topic models [14], we derive a variational bound for the HDP-HMM that accounts for sticky state persistence and can be used for effective Bayesian model selection. Our algorithm uses birth proposal moves to create new states, and merge and delete moves to remove states with poor predictive power. State space adaptations are validated via a global variational bound, but by caching sufficient statistics our memoized algorithm efficiently processes subsets of sequences at each step. Extensive experiments demonstrate the reliability and scalability of our approach, which can be reproduced via Python code we have released online¹.

¹http://bitbucket.org/michaelchughes/x-hdphmm-nips2015/
Figure 1: Illustration of our new birth/merge/delete variational algorithm as it learns to segment motion capture sequences into common exercise types (Sec. 5). Each panel shows segmentations of the same 6 sequences, with time on the horizontal axis: (A) initialization, K=1; (B) after first-lap births, K=47; (C) after first-lap merges, K=37; (D) after second lap, K=56; (E) after 100 laps, K=31; (F) ground truth labels, K=12. Annotations mark accepted merge pairs and an accepted birth. Starting from just one state (A), birth moves at the first sequence create useful states. Local updates to each sequence in turn can use existing states or birth new ones (B). After all sequences are updated once, we perform merge moves to clean up, and the lap is complete (C). After another complete lap of birth updates at each sequence followed by merges and deletes, the segmentation is further refined (D). After many laps, our final segmentation (E) aligns well to labels from a human annotator (F), with some true states aligning to multiple learned states that capture subject-specific variability in exercises.

2 Hierarchical Dirichlet Process Hidden Markov Models

We wish to jointly model N sequences, where sequence n has data x_n = [x_{n1}, x_{n2}, ..., x_{nT_n}] and observation x_{nt} is a vector representing interval or timestep t. For example, x_{nt} ∈ R^D could be the spectrogram for an instant of audio, or human limb positions during a 100ms interval. The HDP-HMM explains this data by assigning each observation x_{nt} to a single hidden state z_{nt}. The chosen state comes from a countably infinite set of options k ∈ {1, 2, ...}, generated via Markovian dynamics with initial state distribution π_0 and transition distributions {π_k}_{k=1}^∞:

p(z_{n1} = k) = π_{0k},   p(z_{nt} = ℓ | z_{n,t−1} = k) = π_{kℓ}.   (1)

We draw data x_{nt} given assigned state z_{nt} = k from an exponential family likelihood F:

F: log p(x_{nt} | φ_k) = s_F(x_{nt})^T φ_k + c_F(φ_k),   H: log p(φ_k | τ̄) = φ_k^T τ̄ + c_H(τ̄).   (2)

The natural parameter φ_k for each state has conjugate prior H.
Cumulant functions c_F, c_H ensure these distributions are normalized. The chosen exponential family is defined by its sufficient statistics s_F. Our experiments consider Bernoulli, Gaussian, and auto-regressive Gaussian likelihoods.

Hierarchies of Dirichlet processes. Under the HDP-HMM prior and posterior, the number of states is unbounded; it is possible that every observation comes from a unique state. The hierarchical Dirichlet process (HDP) [5] encourages sharing states over time via a latent root probability vector β over the infinite set of states (see Fig. 2). The stick-breaking representation of the prior on β first draws independent variables u_k ∼ Beta(1, γ) for each state k, and then sets β_k = u_k ∏_{ℓ=1}^{k−1} (1 − u_ℓ). We interpret u_k as the conditional probability of choosing state k among states {k, k+1, k+2, ...}. In expectation, the K most common states are first in stick-breaking order. We represent their probabilities via the vector [β_1 β_2 ... β_K β_{>K}], where β_{>K} = ∑_{k=K+1}^∞ β_k. Given this (K + 1)-dimensional probability vector β, the HDP-HMM generates transition distributions π_k for each state k from a Dirichlet with mean equal to β and variance governed by a concentration parameter α > 0:

[π_{k1} ... π_{kK} π_{k>K}] ∼ Dir(αβ_1, αβ_2, ..., αβ_{>K}).   (3)

We draw the starting probability vector π_0 from a similar prior with much smaller variance, π_0 ∼ Dir(α_0 β) with α_0 ≫ α, because few starting states are observed.

Sticky self-transition bias. In many applications, we expect each segment to persist for many timesteps. The “sticky” parameterization of [4, 6] favors self-transitions by placing extra prior mass on the transition probability π_{kk}. In particular, [π_{k1} ... π_{k>K}] ∼ Dir(αβ_1, ..., αβ_k + κ, ..., αβ_{>K}), where κ > 0 controls the degree of self-transition bias. Choosing κ ≈ 100 leads to long segment lengths, while avoiding the computational cost of semi-Markov alternatives [7].
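A truncated sampler for this prior is short to write. The sketch below (our own truncation level K and hyperparameter values, chosen for illustration) draws the stick-breaking weights β and the sticky transition rows; with κ = 100 the sampled self-transition probabilities π_kk are close to one:

```python
import numpy as np

def sample_sticky_hdp_hmm_prior(K=10, gamma=5.0, alpha=0.5, kappa=100.0, seed=0):
    rng = np.random.default_rng(seed)
    # Stick-breaking: u_k ~ Beta(1, gamma), beta_k = u_k * prod_{l<k} (1 - u_l).
    u = rng.beta(1.0, gamma, size=K)
    beta = u * np.concatenate(([1.0], np.cumprod(1.0 - u)[:-1]))
    beta_rest = np.prod(1.0 - u)                      # leftover mass beta_{>K}
    base = np.concatenate((beta, [beta_rest]))
    # Sticky rows: pi_k ~ Dir(alpha*beta_1, ..., alpha*beta_k + kappa, ..., alpha*beta_{>K}).
    pi = np.vstack([rng.dirichlet(alpha * base + kappa * np.eye(K + 1)[k])
                    for k in range(K)])
    return beta, beta_rest, pi

beta, beta_rest, pi = sample_sticky_hdp_hmm_prior()
```

The stick-breaking weights plus the leftover mass β_{>K} sum to one by the telescoping product, and each transition row is a proper distribution over the K active states plus the aggregate remainder.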
Figure 2: Left: Graphical representation of the HDP hidden Markov model. Variational parameters are shown in red. Center: Our surrogate bound for the sticky Dirichlet cumulant function c_D (Eq. 9) as a function of α, computed with κ = 100 and uniform β with K = 20 active states. Right: Surrogate bound vs. K, with fixed κ = 100, α = 0.5. This bound remains tight when our state adaptation moves insert or remove states.

3 Memoized and Stochastic Variational Inference

After observing data x, our inferential goal is posterior knowledge of the top-level conditional probabilities u, HMM parameters π, φ, and assignments z. We refer to u, π, φ as global parameters because they generalize to new data sequences. In contrast, the states z_n are local to a specific sequence x_n.

3.1 A Factorized Variational Lower Bound

We seek a distribution q over the unobserved variables that is close to the true posterior, but lies in the simpler factorized family q(·) ≜ q(u)q(φ)q(π)q(z). Each factor has exponential family form with free parameters denoted by hats, and our inference algorithms update these parameters to minimize the Kullback-Leibler (KL) divergence KL(q || p). Our chosen factorization for q is similar to [7], but includes a substantially more accurate approximation to q(u), as detailed in Sec. 3.2.

Factor q(z). For each sequence n, we use an independent factor q(z_n) with Markovian structure:

q(z_n) ≜ [∏_{k=1}^K r̂_{n1k}^{δ_k(z_{n1})}] ∏_{t=1}^{T−1} ∏_{k=1}^K ∏_{ℓ=1}^K (ŝ_{ntkℓ} / r̂_{ntk})^{δ_k(z_{nt}) δ_ℓ(z_{n,t+1})}   (4)

The free parameter vector ŝ_{nt} defines the joint assignment probabilities ŝ_{ntkℓ} ≜ q(z_{n,t+1} = ℓ, z_{nt} = k), so the K² non-negative entries of ŝ_{nt} sum to one. The parameter r̂_{nt} defines the marginal probability r̂_{ntk} = q(z_{nt} = k), and equals r̂_{ntk} = ∑_{ℓ=1}^K ŝ_{ntkℓ}.
We can find the expected count of transitions from state k to ℓ across all sequences via the sufficient statistic M_{kℓ}(ŝ) ≜ ∑_{n=1}^N ∑_{t=1}^{T_n−1} ŝ_{ntkℓ}. The truncation level K limits the total number of states to which data is assigned. Under our approximate posterior, only q(z_n) is constrained by this choice; no global factors are truncated. Indeed, if data is only assigned to the first K states, the conditional independence properties of the HDP-HMM imply that {φ_k, u_k | k > K} are independent of the data. Their optimal variational posteriors thus match the prior, and need not be explicitly computed or stored [15, 16]. Simple variational algorithms treat K as a fixed constant [7], but Sec. 4 develops novel algorithms that fit K to the data.

Factor q(π). For the starting state (k = 0) and each state k ∈ {1, 2, ...}, we define q(π_k) as a Dirichlet distribution: q(π_k) ≜ Dir(θ̂_{k1}, ..., θ̂_{kK}, θ̂_{k>K}). The free parameter θ̂_k is a vector of K + 1 positive numbers, with one entry for each of the K active states and a final entry for the aggregate mass of all other states. The expected log transition probability between states k and ℓ, P_{kℓ}(θ̂) ≜ E_q[log π_{kℓ}] = ψ(θ̂_{kℓ}) − ψ(∑_{m=1}^{K+1} θ̂_{km}), is a key sufficient statistic.

Factor q(φ). The emission parameter φ_k for state k has factor q(φ_k) ≜ H(τ̂_k), conjugate to the likelihood F. The supplement provides details for Bernoulli, Gaussian, and auto-regressive F.

We score the approximation q via an objective function L that assigns a scalar value (higher is better) to each possible input of free parameters, data x, and hyperparameters γ, α, κ, τ̄:

L(·) ≜ E_q[log p(x, z, π, u, φ) − log q(z, π, u, φ)] = L_data + L_entropy + L_hdp-local + L_hdp-global.   (5)

This function provides a lower bound on the marginal evidence: log p(x | γ, α, κ, τ̄) ≥ L. Improving this bound is equivalent to minimizing KL(q || p).
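The transition-count statistic M is a simple aggregation of the pairwise marginals. A sketch (array shapes and names are ours), using one (T_n − 1, K, K) array of pairwise probabilities per sequence:

```python
import numpy as np

def transition_counts(s_hats):
    """M[k, l] = sum over sequences n and timesteps t of s_hats[n][t, k, l].
    Each s_hats[n] has shape (T_n - 1, K, K), with each (K, K) slice a joint
    distribution over consecutive state pairs (its entries sum to one)."""
    return sum(s.sum(axis=0) for s in s_hats)

rng = np.random.default_rng(5)
K = 4
s_hats = []
for T in (6, 9):
    raw = rng.random(size=(T - 1, K, K))
    s_hats.append(raw / raw.sum(axis=(1, 2), keepdims=True))
M = transition_counts(s_hats)
```

Because each timestep contributes one unit of probability mass, M.sum() equals the total number of transitions ∑_n (T_n − 1); this additivity over sequences is exactly what the memoized algorithm of Sec. 3.4 exploits.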
Its four component terms are defined as follows:

L_data(x, r̂, τ̂) ≜ E_q[log p(x | z, φ) + log (p(φ) / q(φ))],
L_hdp-local(ŝ, θ̂, ρ̂, ω̂) ≜ E_q[log p(z | π) + log (p(π) / q(π))],
L_entropy(ŝ) ≜ −E_q[log q(z)],
L_hdp-global(ρ̂, ω̂) ≜ E_q[log (p(u) / q(u))].   (6)

Detailed analytic expansions for each term are available in the supplement.

3.2 Tractable Posterior Inference for Global State Probabilities

Previous variational methods for the HDP-HMM [7], and for HDP topic models [16] and HDP grammars [17], used a zero-variance point estimate for the top-level state probabilities β. While this approximation simplifies inference, the variational objective no longer bounds the marginal evidence. Such pseudo-bounds are unsuitable for model selection and can favor models with redundant states that do not explain any data, but nevertheless increase computational and storage costs [14]. Because we seek to learn compact and interpretable models, and to automatically adapt the truncation level K to each dataset, we instead place a proper beta distribution on u_k, k ∈ {1, 2, ..., K}:

q(u_k) ≜ Beta(ρ̂_k ω̂_k, (1 − ρ̂_k) ω̂_k), where ρ̂_k ∈ (0, 1), ω̂_k > 0.   (7)

Here ρ̂_k = E_{q(u)}[u_k], E_{q(u)}[β_k] = ρ̂_k E[β_{>k−1}], and E_{q(u)}[β_{>k}] = ∏_{ℓ=1}^k (1 − ρ̂_ℓ). The scalar ω̂_k controls the variance, and the zero-variance point estimate is recovered as ω̂_k → ∞. The beta factorization in Eq. (7) complicates evaluation of the marginal likelihood bound in Eq. (6):

L_hdp-local(ŝ, θ̂, ρ̂, ω̂) = E_{q(u)}[c_D(α_0 β)] + ∑_{k=1}^K E_{q(u)}[c_D(αβ + κδ_k)] − ∑_{k=0}^K c_D(θ̂_k) + ∑_{k=0}^K ∑_{ℓ=1}^{K+1} (M_{kℓ}(ŝ) + α_k E_{q(u)}[β_ℓ] + κδ_k(ℓ) − θ̂_{kℓ}) P_{kℓ}(θ̂).   (8)

The Dirichlet cumulant function c_D maps K + 1 positive parameters to a log-normalization constant. For a non-sticky HDP-HMM where κ = 0, previous work [14] established the following bound:

c_D(αβ) ≜ log Γ(α) − ∑_{k=1}^{K+1} log Γ(αβ_k) ≥ K log α + ∑_{ℓ=1}^{K+1} log β_ℓ.
(9)

Direct evaluation of E_q(u)[c_D(αβ)] is problematic because the expectations of log-gamma functions have no closed form, but the lower bound has a simple expectation under beta-distributed q(u_k). Developing a similar bound for sticky models with κ > 0 requires a novel contribution. To begin, in the supplement we establish the following bound for any κ > 0, α > 0:

c_D(αβ + κδ_k) ≥ K log α − log(α + κ) + log(αβ_k + κ) + ∑_{ℓ=1, ℓ≠k}^{K+1} log β_ℓ. (10)

To handle the intractable term E_q(u)[log(αβ_k + κ)], we leverage the concavity of the logarithm:

log(αβ_k + κ) ≥ β_k log(α + κ) + (1 − β_k) log κ. (11)

Combining Eqs. (10) and (11) and taking expectations, we can evaluate a lower bound on Eq. (8) in closed form, and thereby efficiently optimize its parameters. As illustrated in Fig. 2, this rigorous lower bound on the marginal evidence log p(x) is quite accurate for practical hyperparameters.

3.3 Batch and Stochastic Variational Inference

Most variational inference algorithms maximize L via coordinate ascent optimization, where the best value of each parameter is found given fixed values for the other variational factors. For the HDP-HMM this leads to the following updates, which when iterated converge to a local maximum.

Local update to q(z_n). The assignments for each sequence z_n can be updated independently via dynamic programming [18]. The forward-backward algorithm takes as input a T_n × K matrix of log-likelihoods E_q[log p(x_n | φ_k)] given the current τ̂, and log transition probabilities P_jk given the current θ̂. It outputs the optimal marginal state probabilities ŝ_n, r̂_n under objective L. This step has cost O(T_n K²) for sequence n, and we can process multiple sequences in parallel for efficiency.

Global update to q(φ). Conjugate priors lead to simple closed-form updates τ̂_k = τ̄ + S_k, where the sufficient statistic S_k ≜ ∑_{n=1}^{N} ∑_{t=1}^{T_n} r̂_ntk s_F(x_nt) summarizes the data assigned to state k.

Global update to q(π). For each state k ∈ {0, 1, 2, . . . , K}, the positive vector θ̂_k defining the optimal Dirichlet posterior on transition probabilities from state k is θ̂_kℓ = M_kℓ(ŝ) + α β_ℓ + κ δ_k(ℓ). The statistic M_kℓ(ŝ) counts the expected number of transitions from state k to ℓ across all sequences.

Global update to q(u). Due to non-conjugacy, our surrogate objective L has no closed-form update to q(u). Instead, we employ numerical optimization to update the vectors ρ̂, ω̂ simultaneously:

arg max_{ρ̂, ω̂} L_hdp-local(ρ̂, ω̂, θ̂, ŝ) + L_hdp-global(ρ̂, ω̂) subject to ω̂_k > 0, ρ̂_k ∈ (0, 1) for k = 1, 2, . . . , K.

Details are in the supplement. The update to q(u) requires expectations under q(π), and vice versa, so it can be useful to iteratively optimize q(π) and q(u) several times given fixed local statistics.

To handle large datasets, we can adapt these updates to perform stochastic variational inference (SVI) [19]. Stochastic algorithms perform local updates on random subsets of sequences (batches), and then perturb global parameters by following a noisy estimate of the natural gradient, which has a simple closed form. SVI has previously been applied to non-sticky HDP-HMMs with point-estimated β [7], and can be easily adapted to our more principled objective. One drawback of SVI is the requirement of a learning-rate schedule, which must typically be tuned to each dataset.

3.4 Memoized Variational Inference

We now outline a memoized algorithm [13] for our sticky HDP-HMM variational objective. Before execution, each sequence is randomly assigned to one of B batches. The algorithm repeatedly visits batches one at a time in random order; we call each full pass through the complete set of B batches a lap. At each visit to batch b, we perform a local step for all sequences n in batch b and then a global step. With B = 1 batches, memoized inference reduces to the standard full-dataset algorithm, while with larger B we have more affordable local steps and faster overall convergence.
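For concreteness, the closed-form q(π) update of Sec. 3.3 can be sketched in a few lines of numpy. This is a simplifying sketch, not the paper's released code: the function name is ours, and we restrict attention to the K ordinary states (dropping the start-state row and the remainder column of the full (K+1) × (K+1) parameterization).

```python
import numpy as np

def update_q_pi(M, E_beta, alpha, kappa):
    """Dirichlet posterior parameters for the transition rows:
    theta[k, l] = M[k, l] + alpha * E_beta[l] + kappa * 1[k == l].

    M      : (K, K) expected transition counts between the K states.
    E_beta : (K,) expected top-level state probabilities under q(u).
    """
    K = M.shape[0]
    theta = M + alpha * E_beta[np.newaxis, :]
    # Sticky bias: extra pseudo-count kappa on self-transitions only.
    theta[np.arange(K), np.arange(K)] += kappa
    return theta
```

With kappa = 0 this recovers the usual non-sticky Dirichlet update; the sticky term simply shifts the diagonal.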
With just one lap, memoized inference is equivalent to the synchronous version of streaming variational inference presented in Alg. 3 of Broderick et al. [20]. We focus on regimes where dozens of laps are feasible, which we demonstrate dramatically improves performance.

Affordable but exact batch optimization of L is possible by exploiting the additivity of the statistics M, S. For each statistic we track a batch-specific quantity M^b and a whole-dataset summary M ≜ ∑_{b=1}^{B} M^b. After a local step at batch b yields ŝ^b, r̂^b, we update M^b(ŝ^b) and S^b(r̂^b), increment each whole-dataset statistic by adding the new batch summary and subtracting the summary stored in memory from the previous visit, and store (or memoize) the new statistics for future iterations. This update cycle keeps M and S consistent with the most recent assignments for all sequences. Memoization does require O(BK²) more storage than SVI. However, this cost does not scale with the number of sequences N or the length T. Sparsity in the transition counts M may make storage cheaper.

At any point during memoized execution, we can evaluate L exactly for all data seen thus far. This is possible because nearly all terms in Eq. (6) are functions of only the global parameters ρ̂, ω̂, θ̂, τ̂ and the sufficient statistics M, S. The one exception, which requires the local values ŝ, r̂, is the entropy term L_entropy. To compute it, we track a (K + 1) × K matrix H^b at each batch b:

H^b_{0ℓ} = −∑_n r̂_{n1ℓ} log r̂_{n1ℓ}, H^b_{kℓ} = −∑_n ∑_{t=1}^{T_n − 1} ŝ_{ntkℓ} log( ŝ_{ntkℓ} / r̂_{ntk} ), (12)

where the sums aggregate the sequences n that belong to batch b. Each entry of H^b is non-negative, and given the whole-dataset entropy matrix H = ∑_{b=1}^{B} H^b, we have L_entropy = ∑_{k=0}^{K} ∑_{ℓ=1}^{K} H_{kℓ}.

4 State Space Adaptation via Birth, Merge, and Delete Proposals

Reliable nonparametric inference algorithms must quickly identify and create missing states.
Split-merge samplers for HDP topic models [10, 11] are limited because proposals can only split an existing state into two new states, require expensive traversal of all data points to evaluate an acceptance ratio, and often have low acceptance rates [12]. Some variational methods for HDP topic models also dynamically create new topics [16, 21], but do not guarantee improvement of the global objective and can be unstable. We instead interleave stochastic birth proposals with delete and merge proposals, and use memoization to efficiently verify proposals via the exact full-dataset objective.

Birth proposals. Birth moves can create many new states at once while maintaining the monotonic increase of the whole-dataset objective L. Each proposal happens within the local step by trying to improve q(z_n) for a single sequence n. Given current assignments ŝ_n, r̂_n with truncation K, the move proposes new assignments ŝ′_n, r̂′_n that include the K existing states and some new states with index k > K. If L improves under the proposal, we accept and use the expanded set of states for all remaining updates in the current lap. To compute L, we require candidate global parameters ρ̂′, ω̂′, θ̂′, τ̂′. These are found via a global step from candidate summaries M′, S′, which combine the new batch statistics M′_b, S′_b and the memoized statistics of the other batches M′_{\b}, S′_{\b}, expanded by zeros for states k > K. See the supplement for details on handling multiple sequences within a batch.

Figure 3: Toy data experiments (Sec. 5). Top left: Data sequences contain 2D points from 8 well-separated Gaussians with sticky transitions. Top center: Trace plots from initialization with 50 redundant states. Our state-adaptation algorithms (red/purple) reach the ideal K = 8 states and zero Hamming distance regardless of whether a sticky (solid) or non-sticky (dashed) model is used. Competitors converge more slowly, especially in the non-sticky case, because non-adaptive methods are more sensitive to hyperparameters. Bottom: Segmentations of 4 sequences by SVI, the Gibbs sampler, and our method under the non-sticky model (κ = 0); panel titles report stoch: K = 47 after 2000 laps in 359 min, sampler: K = 10 after 2000 laps in 74 min, and delete/merge: K = 8 after 100 laps in 5 min. Top half shows true state assignments; bottom half shows aligned estimated states. Competitors are polluted by extra states (black).

The proposal for expanding ŝ′, r̂′ with new states can flexibly take any form, from very naïve to very data-driven. For data with “sticky” state persistence, we recommend randomly choosing one interval [t, t + δ] of the current sequence to reassign when creating ŝ′, r̂′, leaving the other timesteps fixed. We split this interval into two contiguous blocks (one may be empty), each completely assigned to a new state. In the supplement, we detail a linear-time search that finds the cut point maximizing the objective L_data. Other proposals, such as sub-cluster splits [11], could easily be incorporated in our variational algorithm, but we find this simple interval-based proposal to be fast and effective.

Merge proposals. Merge proposals try to find a less redundant but equally expressive model. Each proposal takes a pair of existing states i < j and constructs a candidate model where the data from state j is reassigned to state i. Conceptually this reassignment gives new values ŝ′, but instead the statistics M′, S′ can be directly computed and used in a global update for the candidate parameters τ̂′, ρ̂′, θ̂′:

S′_i = S_i + S_j, M′_{:i} = M_{:i} + M_{:j}, M′_{i:} = M_{i:} + M_{j:}, M′_{ii} = M_{ii} + M_{jj} + M_{ji} + M_{ij}.
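These merged summaries can be formed directly from the cached statistics. The following is a minimal numpy sketch of the four update rules above, with an illustrative function name and assumed (K, K) and (K, D) array shapes (not the released code's interface):

```python
import numpy as np

def merged_stats(M, S, i, j):
    """Candidate sufficient statistics after merging state j into state i.

    M : (K, K) expected transition counts.
    S : (K, D) emission sufficient statistics.
    Returns (K-1)-state summaries with state j's mass folded into state i.
    """
    Mp = M.copy()
    Mp[:, i] += Mp[:, j]   # incoming transitions: M'_{:i} = M_{:i} + M_{:j}
    Mp[i, :] += Mp[j, :]   # outgoing transitions: M'_{i:} = M_{i:} + M_{j:}
    # After the two additive updates, entry (i, i) already equals
    # M_{ii} + M_{jj} + M_{ji} + M_{ij}, as required.
    Mp = np.delete(np.delete(Mp, j, axis=0), j, axis=1)
    Sp = S.copy()
    Sp[i] += Sp[j]         # S'_i = S_i + S_j
    Sp = np.delete(Sp, j, axis=0)
    return Mp, Sp
```

Because the column update is applied before the row update, the self-transition rule falls out automatically rather than needing a special case.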
While most terms in L are linear functions of our cached sufficient statistics, the entropy L_entropy is not. Thus for each candidate merge pair (i, j), we use O(K) storage and computation to track the column H′_{:i} and the row H′_{i:} of the corresponding merged entropy matrix H′. Because all terms of the matrix H′ in Eq. (12) are non-negative, we can lower-bound L_entropy by summing a subset of H′. As detailed in the supplement, this allows us to rigorously bound the objective L′ when accepting multiple merges of distinct state pairs. Because many entries of H′ are near zero, this bound is very tight, and in practice it enables us to scalably merge many redundant state pairs in each lap through the data.

To identify candidate merge pairs (i, j), we examine all pairs of states and keep those that satisfy L′_data + L′_hdp-local + L′_hdp-global > L_data + L_hdp-local + L_hdp-global. Because entropy must decrease after any merge (L′_entropy < L_entropy), this test is guaranteed to find all possibly useful merges. It is much more efficient than the heuristic correlation score used in prior work on HDP topic models [14].

Deletes. Our proposal to delete a rarely-used state j begins by dropping row j and column j from M to create M′, and dropping S_j from S to create S′. Using a target dataset of sequences with non-trivial mass on state j, x′ = {x_n : ∑_{t=1}^{T_n} r̂_{ntj} > 0.01}, we run global and local parameter updates to reassign observations from the former state j in a data-driven way. Rather than verifying on only the target dataset as in [14], we accept or reject the delete proposal via the whole-dataset bound L. To control computation, we only propose deleting states used in 10 or fewer sequences.
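The bookkeeping for a delete proposal is similarly mechanical. Below is a minimal numpy sketch with our own illustrative names, using the 0.01 mass threshold quoted in the text; the subsequent data-driven reassignment (local and global updates on the target sequences) is not shown:

```python
import numpy as np

def delete_stats(M, S, j):
    """Summaries with state j removed: drop row/column j of M and row j of S.
    State j's observations are reassigned by later local/global updates."""
    Mp = np.delete(np.delete(M, j, axis=0), j, axis=1)
    Sp = np.delete(S, j, axis=0)
    return Mp, Sp

def target_sequences(resp, j, thresh=0.01):
    """Indices n of sequences with non-trivial expected mass on state j,
    i.e. sum_t r[n][t, j] > thresh, matching x' in the text."""
    return [n for n, r in enumerate(resp) if r[:, j].sum() > thresh]
```

Running the reassignment only on the target sequences keeps the proposal cheap, while acceptance is still judged on the whole-dataset bound L.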
Figure 4: Segmentation of the human epigenome: 15 million observations across 173 sequences (Sec. 5). Left: Adaptive runs started at 1 state grow to 70 states within one lap and reach better L scores than 100-state non-adaptive methods. Each run takes several days. Right: Wallclock times and speedup factors for a parallelized local step on 1/3 of this dataset. 64 workers complete a local step with K = 50 states in under one minute.

5 Experiments

We compare our proposed birth-merge-delete memoized algorithm to memoized inference with delete and merge moves only, and with no moves at all. We further run a blocked Gibbs sampler [6] that was previously shown to mix faster than slice samplers [22], and our own implementation of SVI for objective L. These baselines maintain a fixed number of states K, though some states may have their usage fall to zero. We start all fixed-K methods (including the sampler) from matched initializations. See the supplement for further discussion and all details needed to reproduce these experiments.

Toy data. In Fig. 3, we study 32 toy data sequences generated from 8 Gaussian states with sticky transitions [8]. From an abundant initialization with 50 states, the sampler and the non-adaptive variational methods require hundreds of laps to remove redundant states, especially under a non-sticky model (κ = 0). In contrast, our adaptive methods reach the ideal of zero Hamming distance within a few dozen laps regardless of stickiness, suggesting less sensitivity to hyperparameters.

Speaker diarization. We study 21 unrelated audio recordings of meetings with an unknown number of speakers from the NIST 2007 speaker diarization challenge [23].
The sticky HDP-HMM previously achieved state-of-the-art diarization performance [6] using a sampler that required hours of computation. We ran methods from 10 matched initializations with 25 states and κ = 100, computing Hamming distance on non-speech segments as in the standard DER metric. Fig. 5 shows that within minutes, our algorithms consistently find segmentations better aligned to the true speaker labels.

Labelled N = 6 motion capture. Fox et al. [12] introduced a 6-sequence dataset with labels for 12 exercise types, illustrated in Fig. 1. Each sequence has 12 joint angles (wrist, knee, etc.) captured at 0.1-second intervals. Fig. 6 shows that non-adaptive methods struggle even when initialized abundantly with 30 (dashed lines) or 60 (solid) states, while our adaptive methods reach better values of the objective L and cleaner many-to-one alignment to the true exercises.

Large N = 124 motion capture. Next, we apply scalable methods to the 124-sequence dataset of [12]. We lack ground truth here, but Fig. 7 shows deletes and merges making consistent reductions from abundant initializations, and births growing from K = 1. Fig. 7 also shows estimated segmentations for 10 representative sequences, along with skeleton illustrations for the 10 most-used states in this subset. These segmentations align well with held-out text descriptions.

Chromatin segmentation. Finally, we study segmenting the human genome by the appearance patterns of regulatory proteins [24]. We observe 41 binary signals from [3] at 200bp intervals throughout a white blood cell line (CD4T). Each binary value indicates the presence or absence of an acetylation or methylation that controls gene expression. We divide the whole epigenome into 173 sequences (one per batch) with total size T = 15.4 million. Fig. 4 shows our method can grow from 1 state to 70 states and compete favorably with non-adaptive competitors.
We also demonstrate that our parallelized local step yields large 25x speedups when processing such large datasets.

6 Conclusion

Our new variational algorithms adapt HMM state spaces to find clean segmentations driven by Bayesian model selection. Relative to prior work [14], our contributions include a new bound for the sticky HDP-HMM, birth moves with guaranteed improvement, local-step parallelization, and better merge selection rules. Our multiprocessing-based Python code is targeted at genome-scale applications.

Acknowledgments

This research was supported in part by NSF CAREER Award No. IIS-1349774. M. Hughes was supported in part by an NSF Graduate Research Fellowship under Grant No. DGE0228243.

Figure 5: Method comparison on speaker diarization from common K = 25 initializations (Sec. 5). Left: Scatterplot of final Hamming distance for our adaptive method and the sampler. Across 21 meetings (each with 10 initializations shown as individual dots), our method finds segmentations closer to ground truth. Right: Traces of objective L and Hamming distance for meetings representative of good, average, and poor performance (Meeting 11, best; Meeting 16, average; Meeting 21, worst).
Figure 6: Comparison on 6 motion capture streams (Sec. 5). Top: Our adaptive methods reach better L values and lower distance from the true exercise labels. Bottom: Segmentations from the best runs of birth/merge/delete (left; Hdist = 0.34, K = 28 at 100 laps), of deletes and merges only from 30 initial states (middle; Hdist = 0.30, K = 13 at 100 laps), and of the sampler (right; Hdist = 0.49, K = 29 at 1000 laps). Each sequence shows true labels (top half) and estimates (bottom half) colored by the true state with highest overlap (many-to-one).

Figure 7: Study of 124 motion capture sequences (Sec. 5). Top left: Objective L and state count K as more data is seen. Solid lines have 200 initial states; dashed lines 100. Top right: Final segmentation of 10 selected sequences by our method, with id numbers and descriptions from mocap.cs.cmu.edu; the sequences include playground jumps and climbs, swordplay, dance, and basketball dribbling. The 10 most-used states are shown in color, the rest in gray. Bottom: Time-lapse skeletons assigned to each highlighted state (Walk, Climb, Sword, Arms Swing, Dribble, Jump, Balance, Ballet Leap, Ballet Pose).

References

[1] L. R. Rabiner. A tutorial on hidden Markov models and selected applications in speech recognition. Proceedings of the IEEE, 77(2):257–286, 1989.
[2] Zoubin Ghahramani. An introduction to hidden Markov models and Bayesian networks. International Journal of Pattern Recognition and Artificial Intelligence, 15(01):9–42, 2001.
[3] Jason Ernst and Manolis Kellis. Discovery and characterization of chromatin states for systematic annotation of the human genome.
Nature Biotechnology, 28(8):817–825, 2010.
[4] Matthew J. Beal, Zoubin Ghahramani, and Carl E. Rasmussen. The infinite hidden Markov model. In Neural Information Processing Systems, 2001.
[5] Y. W. Teh, M. I. Jordan, M. J. Beal, and D. M. Blei. Hierarchical Dirichlet processes. Journal of the American Statistical Association, 101(476):1566–1581, 2006.
[6] Emily B. Fox, Erik B. Sudderth, Michael I. Jordan, and Alan S. Willsky. A sticky HDP-HMM with application to speaker diarization. Annals of Applied Statistics, 5(2A):1020–1056, 2011.
[7] Matthew J. Johnson and Alan S. Willsky. Stochastic variational inference for Bayesian time series models. In International Conference on Machine Learning, 2014.
[8] Nicholas Foti, Jason Xu, Dillon Laird, and Emily Fox. Stochastic variational inference for hidden Markov models. In Neural Information Processing Systems, 2014.
[9] Andreas Stolcke and Stephen Omohundro. Hidden Markov model induction by Bayesian model merging. In Neural Information Processing Systems, 1993.
[10] Chong Wang and David M. Blei. A split-merge MCMC algorithm for the hierarchical Dirichlet process. arXiv preprint arXiv:1201.1657, 2012.
[11] Jason Chang and John W. Fisher III. Parallel sampling of HDPs using sub-cluster splits. In Neural Information Processing Systems, 2014.
[12] Emily B. Fox, Michael C. Hughes, Erik B. Sudderth, and Michael I. Jordan. Joint modeling of multiple time series via the beta process with application to motion capture segmentation. Annals of Applied Statistics, 8(3):1281–1313, 2014.
[13] Michael C. Hughes and Erik B. Sudderth. Memoized online variational inference for Dirichlet process mixture models. In Neural Information Processing Systems, 2013.
[14] Michael C. Hughes, Dae Il Kim, and Erik B. Sudderth. Reliable and scalable variational inference for the hierarchical Dirichlet process. In Artificial Intelligence and Statistics, 2015.
[15] Yee Whye Teh, Kenichi Kurihara, and Max Welling. Collapsed variational inference for HDP.
In Neural Information Processing Systems, 2008.
[16] Michael Bryant and Erik B. Sudderth. Truly nonparametric online variational inference for hierarchical Dirichlet processes. In Neural Information Processing Systems, 2012.
[17] Percy Liang, Slav Petrov, Michael I. Jordan, and Dan Klein. The infinite PCFG using hierarchical Dirichlet processes. In Empirical Methods in Natural Language Processing, 2007.
[18] Matthew James Beal. Variational algorithms for approximate Bayesian inference. PhD thesis, University of London, 2003.
[19] Matt Hoffman, David Blei, Chong Wang, and John Paisley. Stochastic variational inference. Journal of Machine Learning Research, 14(1), 2013.
[20] Tamara Broderick, Nicholas Boyd, Andre Wibisono, Ashia C. Wilson, and Michael I. Jordan. Streaming variational Bayes. In Neural Information Processing Systems, 2013.
[21] Chong Wang and David Blei. Truncation-free online variational inference for Bayesian nonparametric models. In Neural Information Processing Systems, 2012.
[22] J. Van Gael, Y. Saatci, Y. W. Teh, and Z. Ghahramani. Beam sampling for the infinite hidden Markov model. In International Conference on Machine Learning, 2008.
[23] NIST. Rich transcriptions database. http://www.nist.gov/speech/tests/rt/, 2007.
[24] Michael M. Hoffman, Orion J. Buske, Jie Wang, Zhiping Weng, Jeff A. Bilmes, and William S. Noble. Unsupervised pattern discovery in human chromatin structure through genomic segmentation. Nature Methods, 9(5):473–476, 2012.
Inference for determinantal point processes without spectral knowledge

Rémi Bardenet* (CNRS & CRIStAL UMR 9189, Univ. Lille, France; remi.bardenet@gmail.com) and Michalis K. Titsias* (Department of Informatics, Athens Univ. of Economics and Business, Greece; mtitsias@aueb.gr). *Both authors contributed equally to this work.

Abstract

Determinantal point processes (DPPs) are point process models that naturally encode diversity between the points of a given realization, through a positive definite kernel K. DPPs possess desirable properties, such as exact sampling or analyticity of the moments, but learning the parameters of the kernel K through likelihood-based inference is not straightforward. First, the kernel that appears in the likelihood is not K, but another kernel L related to K through an often intractable spectral decomposition. This issue is typically bypassed in machine learning by directly parametrizing the kernel L, at the price of some interpretability of the model parameters; we follow this approach here. Second, the likelihood has an intractable normalizing constant, which takes the form of a large determinant in the case of a DPP over a finite set of objects, and of a Fredholm determinant in the case of a DPP over a continuous domain. Our main contribution is to derive bounds on the likelihood of a DPP, both for finite and continuous domains. Unlike previous work, our bounds are cheap to evaluate, since they do not rely on approximating the spectrum of a large matrix or an operator. Through the usual arguments, these bounds thus yield cheap variational inference and moderately expensive exact Markov chain Monte Carlo inference methods for DPPs.

1 Introduction

Determinantal point processes (DPPs) are point processes [1] that encode repulsiveness using algebraic arguments. They first appeared in [2], and have since received much attention, as they arise in many fields, e.g. random matrix theory, combinatorics, and quantum physics.
We refer the reader to [3, 4, 5] for detailed tutorial reviews, aimed respectively at audiences of machine learners, statisticians, and probabilists. More recently, DPPs have been considered as a modelling tool, see e.g. [4, 3, 6]: DPPs appear to be a natural alternative to Poisson processes when realizations should exhibit repulsiveness. In [3], for example, DPPs are used to model diversity among summary timelines in a large news corpus. In [7], DPPs model diversity among the results of a search engine for a given query. In [4], DPPs model the spatial repartition of trees in a forest, as similar trees compete for nutrients in the ground and thus tend to grow away from each other. With these modelling applications comes the question of learning a DPP from data, either through a parametrized form [4, 7] or non-parametrically [8, 9]. We focus in this paper on parametric inference.

Similarly to the correlation between function values in a Gaussian process (GP; [10]), the repulsiveness in a DPP is defined through a kernel K, which measures how much two points in a realization repel each other. The likelihood of a DPP involves the evaluation and the spectral decomposition of an operator L defined through a kernel L that is related to K. There are two main issues that arise when performing likelihood-based inference for a DPP. First, the likelihood involves evaluating the kernel L, while it is more natural to parametrize K instead, and there is no easy link between the parameters of these two kernels. The second issue is that the spectral decomposition of the operator L required in the likelihood evaluation is rarely available in practice, for computational or analytical reasons. For example, in the case of a large finite set of objects, as in the news corpus application [3], evaluating the likelihood once requires the eigendecomposition of a large matrix.
Similarly, in the case of a continuous domain, as for the forest application [4], the spectral decomposition of the operator L may not be analytically tractable for nontrivial choices of the kernel L. In this paper, we focus on the second issue, i.e., we provide likelihood-based inference methods that assume the kernel L is parametrized, but that do not require any eigendecomposition, unlike [7]. More specifically, our main contribution is to provide bounds on the likelihood of a DPP that do not depend on the spectral decomposition of the operator L. For the finite case, we draw inspiration from bounds used for variational inference of GPs [11], and we extend these bounds to DPPs over continuous domains.

For ease of presentation, we first consider DPPs over finite sets of objects in Section 2, and we derive bounds on the likelihood. In Section 3, we plug these bounds into known inference paradigms: variational inference and Markov chain Monte Carlo inference. In Section 4, we extend our results to the case of a DPP over a continuous domain. Readers who are only interested in the finite case, or who are unfamiliar with operator theory, can safely skip Section 4 without missing our main points. In Section 5, we experimentally validate our results, before discussing their breadth in Section 6.

2 DPPs over finite sets

2.1 Definition and likelihood

Consider a discrete set of items Y = {x_1, . . . , x_n}, where x_i ∈ R^d is a vector of attributes that describes item i. Let K be a symmetric positive definite kernel [12] on R^d, and let K = ((K(x_i, x_j))) be the Gram matrix of K. The DPP of kernel K is defined as the probability distribution over all 2^n possible subsets Y ⊆ Y such that

P(A ⊂ Y) = det(K_A), (1)

where K_A denotes the sub-matrix of K indexed by the elements of A. This distribution exists and is unique if and only if the eigenvalues of K are in [0, 1] [5]. Intuitively, we can think of K(x, y) as encoding the amount of negative correlation, or “repulsiveness”, between x and y.
Indeed, as remarked in [3], (1) first yields that the diagonal elements of K are marginal probabilities: P(x_i ∈ Y) = K_ii. Equation (1) then entails that x_i and x_j are likely to co-occur in a realization of Y if and only if

det K_{{x_i, x_j}} = K(x_i, x_i) K(x_j, x_j) − K(x_i, x_j)² = P(x_i ∈ Y) P(x_j ∈ Y) − K_ij²

is large: off-diagonal terms in K indicate whether points tend to co-occur.

Provided the eigenvalues of K are further restricted to lie in [0, 1), the DPP of kernel K has a likelihood [1]. More specifically, writing Y_1 for a realization of Y,

P(Y = Y_1) = det(L_{Y_1}) / det(L + I), (2)

where L = (I − K)^{-1} K, I is the n × n identity matrix, and L_{Y_1} denotes the sub-matrix of L indexed by the elements of Y_1. Now, given a realization Y_1, we would like to infer the parameters of the kernel K, say the parameters θ_K = (a_K, σ_K) ∈ (0, ∞)² of a squared exponential kernel [10]

K(x, y) = a_K exp( −∥x − y∥² / (2σ_K²) ). (3)

Since the trace of K is the expected number of points in Y [5], one can estimate a_K by the number of points in the data divided by n [4]. But σ_K, the repulsive “lengthscale”, has to be fitted. If the number of items n is large, likelihood-based methods such as maximum likelihood are too costly: each evaluation of (2) requires O(n²) storage and O(n³) time. Furthermore, valid choices of θ_K are constrained, since one needs to make sure the eigenvalues of K remain in [0, 1). A partial work-around is to note that given any symmetric positive definite kernel L, the likelihood (2) with matrix L = ((L(x_i, x_j))) corresponds to a valid choice of K, since the corresponding matrix K = L(I + L)^{-1} necessarily has eigenvalues in [0, 1], which ensures the DPP exists [5]. The work-around consists in directly parametrizing and inferring the kernel L instead of K, so that the numerator of (2) is cheap to evaluate and the parameters are less constrained.
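To make this parametrization concrete, the likelihood (2) under a directly parametrized L can be evaluated with log-determinants for numerical stability. This is a minimal numpy sketch with our own helper names, not code from the paper:

```python
import numpy as np

def sq_exp_kernel(X, a, sigma):
    """Squared exponential Gram matrix, cf. Eq. (3), applied here to L directly."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return a * np.exp(-d2 / (2.0 * sigma ** 2))

def dpp_log_likelihood(L, idx):
    """log P(Y = Y1) = log det(L_{Y1}) - log det(L + I), Eq. (2).

    L   : (n, n) symmetric positive definite matrix parametrizing the DPP.
    idx : indices of the items in the observed realization Y1.
    """
    _, logdet_LY = np.linalg.slogdet(L[np.ix_(idx, idx)])
    _, logdet_LI = np.linalg.slogdet(L + np.eye(L.shape[0]))
    return logdet_LY - logdet_LI
```

The O(n³) cost of the `slogdet` of L + I is exactly the bottleneck that the bounds of Section 2.2 are designed to avoid.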
Note that this step favours tractability over interpretability of the inferred parameters: if we assume that L, rather than K, takes the squared exponential form (3), with parameters a_L and σ_L, the number of points and the repulsiveness of the points in Y no longer decouple as nicely. For example, the expected number of items in Y now depends on both a_L and σ_L, and both parameters also significantly affect repulsiveness. There is some work investigating approximations to K to retain the more interpretable parametrization [4], but the machine learning literature [3, 7] almost exclusively adopts the more tractable parametrization of L. In this paper, we also make this choice of parametrizing L directly.

Now, the computational bottleneck in the evaluation of (2) is computing det(L + I). While this still prevents the application of maximum likelihood, bounds on this determinant can be used in a variational approach or an MCMC algorithm. In [7], bounds on det(L + I) are proposed that require only the first m eigenvalues of L, where m is chosen adaptively at each MCMC iteration to make the acceptance decision possible. This still requires applying power iteration methods, which are limited to finite domains, require storing the whole n × n matrix L, and are prohibitively slow when the number of required eigenvalues m is large.

2.2 Nonspectral bounds on the likelihood

Let us denote by L_AB the submatrix of L whose row indices correspond to the elements of A and whose column indices correspond to those of B. When A = B, we simply write L_A for L_AA, and we drop the subscript when A = Y. Drawing inspiration from sparse approximations to Gaussian processes using inducing variables [11], we let Z = {z_1, . . . , z_m} be an arbitrary set of points in R^d, and we approximate L by Q = L_YZ L_Z^{-1} L_ZY. Note that we do not constrain Z to belong to Y, so that our bounds do not rely on a Nyström-type approximation [13]. We term the points of Z “pseudo-inputs”, or “inducing inputs”.

Proposition 1.
det(Q + I)^{-1} e^{−tr(L − Q)} ≤ det(L + I)^{-1} ≤ det(Q + I)^{-1}. (4)

The proof relies on a nontrivial inequality on determinants [14, Theorem 1], and is provided in the supplementary material.

3 Learning a DPP using bounds

In this section, we explain how to run variational inference and Markov chain Monte Carlo methods using the bounds in Proposition 1, and we make the connections with variational sparse Gaussian processes more explicit.

3.1 Variational inference

The lower bound in Proposition 1 can be used for variational inference. Assume we have T point process realizations Y_1, . . . , Y_T, and we fit a DPP with kernel L = L_θ. Using (2), the log likelihood can be expressed as

ℓ(θ) = ∑_{t=1}^{T} log det(L_{Y_t}) − T log det(L + I). (5)

Let Z be an arbitrary set of m points in R^d. Proposition 1 then yields a lower bound

F(θ, Z) ≜ ∑_{t=1}^{T} log det(L_{Y_t}) − T log det(Q + I) − T tr(L − Q) ≤ ℓ(θ). (6)

The lower bound F(θ, Z) can be computed efficiently in O(nm²) time, which is considerably cheaper than a power iteration in O(n²) if m ≪ n. Instead of maximizing ℓ(θ), we thus maximize F(θ, Z) jointly w.r.t. the kernel parameters θ and the variational parameters Z. To maximize (6), one can e.g. implement an EM-like scheme, alternately optimizing in Z and θ. Kernels are often differentiable with respect to θ, and sometimes F will also be differentiable with respect to the pseudo-inputs Z, so that gradient-based methods can help. In the general case, black-box optimizers such as CMA-ES [15] can also be employed.

3.2 Markov chain Monte Carlo inference

If approximate inference is not suitable, we can use the bounds in Proposition 1 to build a more expensive Markov chain Monte Carlo sampler [16]. Given a prior distribution p(θ) on the parameters θ of L, Bayesian inference relies on the posterior distribution π(θ) ∝ exp(ℓ(θ)) p(θ), where the log likelihood ℓ(θ) is defined in (5). A standard approach to sample approximately from π(θ) is the Metropolis-Hastings algorithm (MH; [16, Chapter 7.3]).
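The bound F(θ, Z) of Eq. (6) can be evaluated without ever forming an n × n matrix, by applying the matrix determinant lemma to det(Q + I). The sketch below is ours, not the authors' code: it assumes the squared exponential form for L (so that tr(L) = n·a), and adds a small jitter to L_Z for numerical stability.

```python
import numpy as np

def sq_exp(A, B, a, sigma):
    """Squared exponential cross-kernel matrix between point sets A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return a * np.exp(-d2 / (2.0 * sigma ** 2))

def variational_bound(X, realizations, Z, a, sigma, jitter=1e-10):
    """F(theta, Z) <= l(theta), Eq. (6), in O(n m^2) time.

    realizations : list of T index arrays, one per observed point process.
    Z            : (m, d) pseudo-inputs (not constrained to lie in X).
    """
    n, m = len(X), len(Z)
    Lxz = sq_exp(X, Z, a, sigma)                     # n x m
    Lzz = sq_exp(Z, Z, a, sigma) + jitter * np.eye(m)
    # Matrix determinant lemma:
    # det(I_n + Lxz Lzz^{-1} Lzx) = det(Lzz + Lzx Lxz) / det(Lzz)
    inner = Lzz + Lxz.T @ Lxz
    logdet_QI = np.linalg.slogdet(inner)[1] - np.linalg.slogdet(Lzz)[1]
    # tr(L) = n * a for the squared exponential kernel; tr(Q) via one solve.
    trace_gap = n * a - np.trace(np.linalg.solve(Lzz, Lxz.T @ Lxz))
    T = len(realizations)
    F = -T * (logdet_QI + trace_gap)
    for Y in realizations:
        F += np.linalg.slogdet(sq_exp(X[Y], X[Y], a, sigma))[1]
    return F
```

Only n × m and m × m matrices are ever built, which is what makes joint optimization over θ and Z affordable.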
MH consists in building an ergodic Markov chain with invariant distribution π(θ). Given a proposal q(θ′|θ), the MH algorithm starts its chain at a user-defined θ0; at iteration k + 1 it proposes a candidate state θ′ ∼ q(·|θk) and sets θk+1 to θ′ with probability
\[
\alpha(\theta_k, \theta') = \min\left(1,\; \frac{e^{\ell(\theta')}\, p(\theta')}{e^{\ell(\theta_k)}\, p(\theta_k)}\, \frac{q(\theta_k|\theta')}{q(\theta'|\theta_k)}\right), \tag{7}
\]
while θk+1 is otherwise set to θk. The core of the algorithm is thus to draw a Bernoulli variable with parameter α = α(θ, θ′) for θ, θ′ ∈ R^d. This is typically implemented by drawing a uniform u ∼ U[0,1] and checking whether u < α. In our DPP application, we cannot evaluate α. But we can use Proposition 1 to build a lower and an upper bound ℓ(θ) ∈ [b−(θ, Z), b+(θ, Z)], which can be arbitrarily refined by increasing the cardinality of Z and optimizing over Z. We can thus build a lower and upper bound for α:
\[
b_-(\theta', Z') - b_+(\theta, Z) + \log\frac{p(\theta')}{p(\theta)} \;\le\; \log\alpha \;\le\; b_+(\theta', Z') - b_-(\theta, Z) + \log\frac{p(\theta')}{p(\theta)}. \tag{8}
\]
Now, another way to draw a Bernoulli variable with parameter α is to first draw u ∼ U[0,1], and then refine the bounds in (8), by augmenting the numbers |Z|, |Z′| of inducing variables and optimizing over Z, Z′, until¹ log u falls outside the interval formed by the bounds in (8). One can then decide whether u < α. This Bernoulli trick is sometimes named retrospective sampling and was suggested as early as [17]. It has been used within MH for inference on DPPs with spectral bounds in [7]; we simply adapt it to our non-spectral bounds.

4 The case of continuous DPPs

DPPs can be defined over very general spaces [5]. We limit ourselves here to point processes on X ⊂ R^d for which the notion of likelihood can be extended. In particular, we define a DPP on X as in [1, Example 5.4(c)], through its Janossy density. For definitions of traces and determinants of operators, we follow [18, Section VII].
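The refine-until-decidable loop described above can be sketched as follows. The `refine` callback, standing in for enlarging Z, Z′ and re-optimizing the pseudo-inputs, is an assumption of this sketch, not the paper's implementation.

```python
def retrospective_bernoulli(log_u, refine, max_level=60):
    """Decide the MH event u < alpha without ever evaluating alpha.

    refine(k) -> (lower_k, upper_k): bounds on log(alpha) that tighten as
    k grows, standing in for adding inducing variables to Z, Z' and
    re-optimizing them.  This helper and its signature are illustrative.
    """
    lower, upper = refine(0)
    for k in range(1, max_level + 1):
        if log_u < lower:
            return True           # u < alpha is certain: accept
        if log_u > upper:
            return False          # u >= alpha is certain: reject
        lower, upper = refine(k)  # bounds too loose: refine them
    return log_u < lower          # give up refining; decide with best lower bound
```

Because the decision is only taken once log u has left the bounding interval, the accept/reject outcome coincides with that of the ideal MH chain whenever the bounds eventually separate log u from log α.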
¹Note that this necessarily happens under fairly weak assumptions: saying that the upper and lower bounds in (4) match when m goes to infinity is saying that the integral of the posterior variance of a Gaussian process with no evaluation error goes to zero as we add more distinct training points.

4.1 Definition

Let µ be a measure on (R^d, B(R^d)) that is continuous with respect to the Lebesgue measure, with density µ′. Let L be a symmetric positive definite kernel. L defines a self-adjoint operator on L²(µ) through L(f) ≜ ∫ L(x, y)f(y) dµ(y). Assume L is trace-class, and
\[
\mathrm{tr}(L) = \int_{\mathcal{X}} L(x, x)\, d\mu(x). \tag{9}
\]
We assume (9) to avoid technicalities. Proving (9) can be done by requiring various assumptions on L and µ. Under the assumptions of Mercer's theorem, for instance, (9) will be satisfied [18, Section VII, Theorem 2.3]. More generally, the assumptions of [19, Theorem 2.12] apply to kernels over noncompact domains, in particular the Gaussian kernel with Gaussian base measure that is often used in practice. We denote by λi the eigenvalues of the compact operator L. There exists [1, Example 5.4(c)] a simple² point process on R^d such that
\[
P\bigl(\text{there are } n \text{ particles, one in each of the infinitesimal balls } B(x_i, dx_i)\bigr) = \frac{\det\bigl((L(x_i, x_j))\bigr)}{\det(I + L)}\, \mu'(x_1) \cdots \mu'(x_n), \tag{10}
\]
where B(x, r) is the open ball of center x and radius r, and where det(I + L) ≜ ∏_{i=1}^{∞}(λi + 1) is the Fredholm determinant of the operator L [18, Section VII]. Such a process is called the determinantal point process associated to kernel L and base measure µ.³ Equation (10) is the continuous equivalent of (2). Our bounds require Ψ to be computable. This is the case for the popular Gaussian kernel with Gaussian base measure.

4.2 Nonspectral bounds on the likelihood

In this section, we derive bounds on the likelihood (10) that do not require computing the Fredholm determinant det(I + L).

Proposition 2. Let Z = {z1, . . .
, zm} ⊂ R^d. Then
\[
\frac{\det L_Z}{\det(L_Z + \Psi)}\, e^{-\int L(x,x)\,d\mu(x) + \mathrm{tr}(L_Z^{-1}\Psi)} \;\le\; \frac{1}{\det(I + L)} \;\le\; \frac{\det L_Z}{\det(L_Z + \Psi)}, \tag{11}
\]
where L_Z = ((L(z_i, z_j))) and Ψ_{ij} = ∫ L(z_i, x)L(x, z_j) dµ(x). As for Proposition 1, the proof relies on a nontrivial inequality on determinants [14, Theorem 1] and is provided in the supplementary material. We also detail in the supplementary material why (11) is the continuous equivalent of (4).

5 Experiments

5.1 A toy Gaussian continuous experiment

In this section, we consider a DPP on R, so that the bounds derived in Section 4 apply. As in [7, Section 5.1], we take the base measure to be proportional to a Gaussian, i.e. its density is µ′(x) = κN(x|0, (2α)⁻²). We consider a squared exponential kernel L(x, y) = exp(−ε²∥x − y∥²). In this particular case, the spectral decomposition of the operator L is known [20]⁴: the eigenfunctions of L are scaled Hermite polynomials, while the eigenvalues form a geometrically decreasing sequence. This 1D Gaussian-Gaussian example is interesting for two reasons. First, the spectral decomposition of L is known, so that we can sample exactly from the corresponding DPP [5] and thus generate synthetic datasets. Second, the Fredholm determinant det(I + L) in this special case is a q-Pochhammer symbol, and can thus be efficiently computed, see e.g. the SymPy library in Python. This allows for comparison with "ideal" likelihood-based methods, to check the validity of our MCMC sampler, for instance. We emphasize that these special properties are not needed for the inference methods in Section 3; they are simply useful to demonstrate their correctness.

²i.e., one for which all points in a realization are distinct.
³There is a notion of kernel K for general DPPs [5], but we define L directly here, for the sake of simplicity. The interpretability issues of using L instead of K are the same as in the finite case; see Sections 2 and 5.
⁴We follow the parametrization of [20] for ease of reference.
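Proposition 2 can be checked numerically in the 1D Gaussian-Gaussian setting of Section 5.1 by discretizing the operator on a quadrature grid. Everything below (grid, parameter values, variable names) is our own illustrative sketch; in particular, Ψ is evaluated by quadrature rather than in the closed form the text alludes to.

```python
import numpy as np

# 1D Gaussian-Gaussian pair of Section 5.1; (kappa, alpha, eps) are
# illustrative values, and the quadrature grid is our own sketch.
kappa, alpha, eps = 10.0, 0.5, 1.0
x = np.linspace(-8.0, 8.0, 400)        # quadrature nodes
dx = x[1] - x[0]
var = (2.0 * alpha) ** -2
mu_density = kappa * np.exp(-x**2 / (2.0 * var)) / np.sqrt(2.0 * np.pi * var)
w = mu_density * dx                    # weights approximating d(mu)

def L(a, b):
    """Squared-exponential kernel exp(-eps^2 (a - b)^2)."""
    return np.exp(-eps**2 * (a[:, None] - b[None, :]) ** 2)

# Discretized operator: its eigenvalues approximate the lambda_i of L,
# so log det(I + L) ~ sum_i log(1 + lambda_i).
sw = np.sqrt(w)
lam = np.linalg.eigvalsh(sw[:, None] * L(x, x) * sw[None, :])
log_fredholm = np.log1p(np.clip(lam, 0.0, None)).sum()

# Proposition 2 bounds from m = 6 pseudo-inputs Z.
Z = np.linspace(-2.0, 2.0, 6)
L_Z = L(Z, Z) + 1e-10 * np.eye(Z.size)
C = L(x, Z)                            # C[i, j] = L(x_i, z_j)
Psi = C.T @ (w[:, None] * C)           # Psi_ij = int L(z_i,x) L(x,z_j) dmu(x)
tr_L = w.sum()                         # L(x, x) = 1, so tr(L) = mu(X)
log_ratio = np.linalg.slogdet(L_Z)[1] - np.linalg.slogdet(L_Z + Psi)[1]
upper = log_ratio                      # bound on log(1/det(I + L))
lower = log_ratio - tr_L + np.trace(np.linalg.solve(L_Z, Psi))
```

On the discretized problem the two bounds sandwich −log det(I + L) exactly, since the quadrature version of (11) reduces to the finite-matrix Proposition 1.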
We sample a synthetic dataset using (κ, α, ε) = (1000, 0.5, 1), resulting in 13 points shown in red in Figure 1(a). Applying the variational inference method of Section 3.1, jointly optimizing in Z and θ = (κ, α, ε) using the CMA-ES optimizer [15], yields poorly consistent results: κ varies over several orders of magnitude from one run to another, and relative errors for α and ε go up to 100% (not shown). We thus investigate the identifiability of the parameters with the retrospective MH of Section 3.2. To limit the range of κ, we choose for (log κ, log α, log ε) a wide uniform prior over [200, 2000] × [−10, 10] × [−10, 10]. We use a Gaussian proposal, the covariance matrix of which is adapted on-the-fly [21] so as to reach a 25% acceptance rate. We start each iteration with m = 20 pseudo-inputs, and increase m by 10 and re-optimize whenever the acceptance decision cannot be made. Most iterations could be made with m = 20, and the maximum number of inducing inputs required in our run was 80. We show the results of a run of length 10 000 in Figure 1. Removing a burn-in sample of size 1000, we show the resulting marginal histograms in Figures 1(b), 1(c), and 1(d). Retrospective MH and the ideal MH agree. The prior pdf is in green. The posterior marginals of α and ε are centered around the values used for simulation, and are very different from the prior, showing that the likelihood contains information about α and ε. However, as expected, almost nothing is learnt about κ, as posterior and prior roughly coincide. This is an example of the issues that come with parametrizing L directly, as mentioned in Section 1. It is also an example where MCMC is preferable to the variational approach of Section 3. Note that this can be detected through the variability of the results of the variational approach across independent runs. To conclude, we show a set of optimized pseudo-inputs Z in black in Figure 1(a).
We also superimpose the marginal of any single point in the realization, which is available through the spectral decomposition of L here [5]. In this particular case, this marginal is a Gaussian. Interestingly, the pseudo-inputs look like evenly spread samples from this marginal. Intuitively, they are likely to make the denominator in the likelihood (10) small, as they represent an ideal sample of the Gaussian-Gaussian DPP.

5.2 Diabetic neuropathy dataset

Here, we consider a real dataset of spatial patterns of nerve fibers in diabetic patients. These fibers become more clustered as diabetes progresses [22]. The dataset consists of 7 samples, collected from diabetic patients at different stages of diabetic neuropathy and from one healthy subject. We follow the experimental setup used in [7] and split the samples into two classes: Normal/Mildly Diabetic and Moderately/Severely Diabetic. The first class contains three samples and the second the remaining four. Figure 2 displays the point process data, which contain on average 90 points per sample in the Normal/Mildly class and 67 in the Moderately/Severely class. We investigate the differences between these classes by fitting a separate DPP to each class and then quantifying the differences in the repulsion, or overdispersion, of the point process data through the inferred kernel parameters. Following [7], we consider a continuous DPP on R², with kernel function
\[
L(x_i, x_j) = \exp\left(-\sum_{d=1}^{2} \frac{(x_{i,d} - x_{j,d})^2}{2\sigma_d^2}\right), \tag{12}
\]
and base measure proportional to a Gaussian, µ′(x) = κ ∏_{d=1}^{2} N(x_d | µ_d, ρ_d²). As in [7], we quantify the overdispersion of realizations of such a Gaussian-Gaussian DPP through the quantities γ_d = σ_d/ρ_d, which are invariant to the scaling of x. Note, however, that strictly speaking κ also mildly influences repulsion. We investigate the ability of the variational method in Section 3.1 to perform approximate maximum likelihood training over the kernel parameters θ = (κ, σ1, σ2, ρ1, ρ2).
[Figure 1: Results of adaptive Metropolis-Hastings in the 1D continuous experiment of Section 5.1. Figure 1(a) shows data in red, a set of optimized pseudo-inputs in black for θ set to the value used in the generation of the synthetic dataset, and the marginal of one point in the realization in blue. Figures 1(b), 1(c), and 1(d) show marginal histograms of κ, α, ε.]

Specifically, we wish to fit a separate continuous DPP to each class by jointly maximizing the variational lower bound over θ and the inducing inputs Z using gradient-based optimization. Given that the number of inducing variables determines the amount of approximation, or compression, of the DPP model, we examine different settings for this number and check whether the correspondingly trained models provide similar estimates of the overdispersion measures. Thus, we train the DPPs under different approximations with m ∈ {50, 100, 200, 400, 800, 1200} inducing variables and display the estimated overdispersion measures in Figures 3(a) and 3(b). These estimates converge to coherent values as m increases. They show a clear separation between the two classes, as also found in [7, 22]. This also suggests tuning m in practice by increasing it until inference results stop varying. Furthermore, Figures 3(c) and 3(d) show the values of the upper and lower bounds on the log likelihood, which, as expected, converge to the same limit as m increases. We point out that the overall optimization of the variational lower bound is relatively fast in our MATLAB implementation.
For instance, it takes 24 minutes for the most expensive run (m = 1200) to perform 1 000 iterations until convergence. Smaller values of m yield significantly shorter times. Finally, as in Section 5.1, we comment on the optimized pseudo-inputs. Figure 4 displays the inducing points at the end of a converged run of variational inference for various values of m. Similarly to Figure 1(a), these pseudo-inputs are placed on remarkably neat grids and depart significantly from their initial locations.

[Figure 2: Six out of the seven nerve fiber samples. The first three samples (from left to right) correspond to a Normal Subject and two Mildly Diabetic Subjects, respectively. The remaining three samples correspond to a Moderately Diabetic Subject and two Severely Diabetic Subjects.]

[Figure 3: Figures 3(a) and 3(b) show the evolution of the estimated overdispersion measures γ1 and γ2 as functions of the number of inducing variables used. The dotted black lines correspond to the Normal/Mildly Diabetic class and the solid lines to the Moderately/Severely Diabetic class. Figure 3(c) shows the upper bound (red) and the lower bound (blue) on the log likelihood as functions of the number of inducing variables for the Normal/Mildly Diabetic class; the Moderately/Severely Diabetic case is shown in Figure 3(d).]

[Figure 4: We illustrate the optimization over the inducing inputs Z for several values of m ∈ {50, 100, 200, 400, 800, 1200} in the DPP of Section 5.2. We consider the Normal/Mildly Diabetic class. The panels in the top row show the initial inducing input locations for various values of m, while the corresponding panels in the bottom row show the optimized locations.]

6 Discussion

We have proposed novel, cheap-to-evaluate, nonspectral bounds on the determinants arising in the likelihoods of DPPs, both finite and continuous. We have shown how to use these bounds to infer the parameters of a DPP, and demonstrated their use for expensive-but-exact MCMC and cheap-but-approximate variational inference. In particular, these bounds have some degrees of freedom (the pseudo-inputs), which we optimize so as to tighten the bounds. This optimization step is crucial for likelihood-based inference of parametric DPP models, where bounds have to adapt to the point where the likelihood is evaluated in order to yield decisions that are consistent with the ideal underlying algorithms. In future work, we plan to investigate connections of our bounds with the quadrature-based bounds for Fredholm determinants of [23]. We also plan to consider variants of DPPs that condition on the number of points in the realization, and to put joint priors over the within-class distributions of the features in classification problems, in a manner related to [6]. In the long term, we will investigate connections between the kernels L and K that could be made without spectral knowledge, to address the issue of replacing L by K.

Acknowledgments

We would like to thank Adrien Hardy for useful discussions and Emily Fox for kindly providing access to the diabetic neuropathy dataset. RB was funded by a research fellowship through the 2020 Science programme, funded by EPSRC grant number EP/I017909/1, and by ANR project ANR-13-BS-03-0006-01.

References

[1] D. J. Daley and D. Vere-Jones. An Introduction to the Theory of Point Processes. Springer, 2nd edition, 2003.
[2] O. Macchi. The coincidence approach to stochastic point processes. Advances in Applied Probability, 7:83–122, 1975.
[3] A. Kulesza and B. Taskar. Determinantal point processes for machine learning.
Foundations and Trends in Machine Learning, 2012.
[4] F. Lavancier, J. Møller, and E. Rubak. Determinantal point process models and statistical inference. Journal of the Royal Statistical Society, 2014.
[5] J. B. Hough, M. Krishnapur, Y. Peres, and B. Virág. Determinantal processes and independence. Probability Surveys, 2006.
[6] J. Y. Zou and R. P. Adams. Priors for diversity in generative latent variable models. In Advances in Neural Information Processing Systems (NIPS), 2012.
[7] R. H. Affandi, E. B. Fox, R. P. Adams, and B. Taskar. Learning the parameters of determinantal point processes. In Proceedings of the International Conference on Machine Learning (ICML), 2014.
[8] J. Gillenwater, A. Kulesza, E. B. Fox, and B. Taskar. Expectation-maximization for learning determinantal point processes. In Advances in Neural Information Processing Systems (NIPS), 2014.
[9] Z. Mariet and S. Sra. Fixed-point algorithms for learning determinantal point processes. In Advances in Neural Information Processing Systems (NIPS), 2015.
[10] C. E. Rasmussen and C. K. I. Williams. Gaussian Processes for Machine Learning. MIT Press, 2006.
[11] M. K. Titsias. Variational learning of inducing variables in sparse Gaussian processes. In AISTATS, volume 5, 2009.
[12] N. Cristianini and J. Shawe-Taylor. Kernel methods for pattern recognition. Cambridge University Press, 2004.
[13] R. H. Affandi, A. Kulesza, E. B. Fox, and B. Taskar. Nyström approximation for large-scale determinantal processes. In Proceedings of the Conference on Artificial Intelligence and Statistics (AISTATS), 2013.
[14] E. Seiler and B. Simon. An inequality among determinants. Proceedings of the National Academy of Sciences, 1975.
[15] N. Hansen. The CMA evolution strategy: a comparing review. In Towards a New Evolutionary Computation. Advances on Estimation of Distribution Algorithms. Springer, 2006.
[16] C. P. Robert and G. Casella. Monte Carlo Statistical Methods. Springer, 2004.
[17] L. Devroye.
Non-uniform random variate generation. Springer-Verlag, 1986.
[18] I. Gohberg, S. Goldberg, and M. A. Kaashoek. Classes of Linear Operators, Volume I. Springer, 1990.
[19] B. Simon. Trace Ideals and Their Applications. American Mathematical Society, 2nd edition, 2005.
[20] G. E. Fasshauer and M. J. McCourt. Stable evaluation of Gaussian radial basis function interpolants. SIAM Journal on Scientific Computing, 34(2), 2012.
[21] H. Haario, E. Saksman, and J. Tamminen. An adaptive Metropolis algorithm. Bernoulli, 7:223–242, 2001.
[22] L. A. Waller, A. Särkkä, V. Olsbo, M. Myllymäki, I. G. Panoutsopoulou, W. R. Kennedy, and G. Wendelschafer-Crabb. Second-order spatial analysis of epidermal nerve fibers. Statistics in Medicine, 30(23):2827–2841, 2011.
[23] F. Bornemann. On the numerical evaluation of Fredholm determinants. Mathematics of Computation, 79(270):871–915, 2010.
A Bayesian Framework for Modeling Confidence in Perceptual Decision Making

Koosha Khalvati, Rajesh P. N. Rao
Department of Computer Science and Engineering
University of Washington
Seattle, WA 98195
{koosha, rao}@cs.washington.edu

Abstract

The degree of confidence in one's choice or decision is a critical aspect of perceptual decision making. Attempts to quantify a decision maker's confidence by measuring accuracy in a task have yielded limited success because confidence and accuracy are typically not equal. In this paper, we introduce a Bayesian framework to model confidence in perceptual decision making. We show that this model, based on partially observable Markov decision processes (POMDPs), is able to predict the confidence of a decision maker based only on the data available to the experimenter. We test our model on two experiments on confidence-based decision making involving the well-known random dots motion discrimination task. In both experiments, we show that our model's predictions closely match experimental data. Additionally, our model is also consistent with other phenomena such as the hard-easy effect in perceptual decision making.

1 Introduction

The brain is faced with the persistent challenge of decision making under uncertainty due to noise in the sensory inputs and perceptual ambiguity. A mechanism for self-assessment of one's decisions is therefore crucial for evaluating the uncertainty in one's decisions. This kind of decision making, called perceptual decision making, and the associated self-assessment, called confidence, have received considerable attention in decision making experiments in recent years [9, 10, 12, 13]. One possible way of estimating the confidence of a decision maker is to assume that it is equal to the accuracy (or performance) on the task.
However, the decision maker's belief about the chance of success and accuracy need not be equal, because the decision maker may not have access to information that the experimenter has access to [3]. For example, in the well-known task of random dots motion discrimination [18], on each trial the experimenter knows the difficulty of the task (coherence or motion strength of the dots), but the decision maker does not [3, 13]. In this case, when the data is binned based on the difficulty of the task, accuracy is not equal to decision confidence. An alternate way to estimate the subject's confidence is to use auxiliary tasks such as post-decision wagering [15] or asking the decision maker to estimate confidence explicitly [12]. These methods, however, only provide an indirect window into the subject's confidence and are not always applicable. In this paper, we explain how a model of decision making based on Partially Observable Markov Decision Processes (POMDPs) [16, 5] can be used to estimate a decision maker's confidence based on experimental data. POMDPs provide a unifying Bayesian framework for modeling several important aspects of perceptual decision making, including evidence accumulation via Bayesian updates, the role of priors, and the costs and rewards of actions. One of the advantages of the POMDP model over other models is that it can incorporate various types of uncertainty in computing the optimal decision making strategy. Drift-diffusion and race models are able to handle uncertainty in probability updates [3] but not the costs and rewards of actions. Furthermore, these models originated as descriptive models of observed data, whereas the POMDP approach is fundamentally normative, prescribing the optimal policy for any task requiring decision making under uncertainty. In addition, the POMDP model can capture the temporal dynamics of a task.

(This research was supported by NSF grants EEC-1028725 and 1318733, and ONR grant N000141310817.)
Time has been shown to play a crucial role in decision making, especially in decision confidence [12, 4]. POMDPs have previously been used to model evidence accumulation and understand the role of priors [16, 5, 6]. To our knowledge, this is the first time the framework has been applied to model confidence and explain experimental data on confidence-based decision making tasks. In the following sections, we introduce some basic concepts in perceptual decision making and show how a POMDP can model decision confidence. We then explore the model's predictions for two well-known experiments in perceptual decision making involving confidence: (1) a fixed-duration motion discrimination task with post-decision wagering [13], and (2) a reaction-time motion discrimination task with confidence report [12]. Our results show that the predictions of the POMDP model closely match experimental data. The model's predictions are also consistent with the "hard-easy" phenomenon in decision making, involving over-confidence in the hard trials and under-confidence in the easy ones [7].

2 Accuracy, Belief and Confidence in Perceptual Decision Making

Consider perceptual decision making tasks in which the subject has to guess the hidden state of the environment correctly to get a reward. Any guess other than the correct state usually leads to no reward. The decision maker has been trained on the task, and wants to obtain the maximum possible reward. Since the state is hidden, the decision maker must use one or more observations to estimate the state. For example, the state could be one of two biased coins, one biased toward heads and the other toward tails. On each trial, the experimenter picks one of these coins randomly and flips it. The decision maker only sees the result, heads or tails, and must guess which coin has been picked. If she guesses correctly, she gets a reward immediately. If she fails, she gets nothing.
In this context, accuracy is defined as the number of correct guesses divided by the total number of trials. In a single trial, if A represents the action (or choice) of the decision maker, and S and Z denote the state and observation respectively, then the accuracy for the choice s with observation z is the probability P(A = a_s | S = s, Z = z), where a_s represents the action of the decision maker, i.e. choosing s, and s is the true state. This accuracy can be measured by the experimenter. However, from the decision maker's perspective, her chance of success in a trial is given by the probability of s being the correct state, given observation z: P(S = s | Z = z). We call this probability the decision maker's belief. After choosing an action, for example a_s, the confidence for this choice is the probability P(S = s | A = a_s, Z = z). According to Bayes' theorem,
\[
P(A|S, Z)\,P(S|Z) = P(S|A, Z)\,P(A|Z). \tag{1}
\]
As the goal of our decision maker is to maximize her reward, she picks the most probable state. This means that on observing z she picks a_{s*}, where s* is the most probable state, i.e. s* = arg max_s P(S = s | Z = z). Therefore, P(A | Z = z) is equal to 1 for a_{s*} and 0 for the other actions. As a result, accuracy is 1 for the most probable state and 0 for the rest. Also, P(S|A, Z) is equal to P(S|Z) for the most probable state. This means that, given observation z, accuracy is equal to the confidence in the most probable state, and this confidence is equal to the belief in the most probable state. As confidence cannot be defined for actions not performed, one could consider confidence in the most probable state only, implying that accuracy, confidence, and belief are all equal given observation z:
\[
\sum_{s} P(A = a_s | S = s, Z)\,P(S = s | Z) = P(S = s^* | A = a_{s^*}, Z) = P(S = s^* | Z).^1 \tag{2}
\]
All of the above equalities, however, depend on the ability of the decision maker to compute P(S|Z). According to Bayes' theorem, P(S|Z) = P(Z|S)P(S)/P(Z) (P(Z) ≠ 0).
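The biased-coin computation of P(S|Z) can be made concrete. A minimal sketch, with illustrative biases (0.8/0.2) and a uniform prior that are not taken from the paper:

```python
# Two hidden states: a coin "H" biased toward heads and a coin "T" biased
# toward tails.  The numbers P(heads|H)=0.8, P(heads|T)=0.2 and the uniform
# prior are illustrative choices, not values from the paper.
p_z_given_s = {"H": {"heads": 0.8, "tails": 0.2},
               "T": {"heads": 0.2, "tails": 0.8}}
prior = {"H": 0.5, "T": 0.5}

def belief(z):
    """P(S|Z=z) via Bayes' theorem: P(Z|S)P(S)/P(Z)."""
    joint = {s: p_z_given_s[s][z] * prior[s] for s in prior}
    p_z = sum(joint.values())
    return {s: joint[s] / p_z for s in joint}

b = belief("heads")          # belief after observing a single heads
s_star = max(b, key=b.get)   # the decision maker picks the most probable state
```

Here b["H"] = 0.8: the belief in the most probable state, the confidence in choosing it, and the accuracy given z = heads all coincide, as in Equation (2).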
If the decision maker has the perfect observation model P(Z|S), she could compute P(S|Z) by estimating P(S) and P(Z) beforehand: by counting the total number of occurrences of each state without considering any observations, and the total number of occurrences of observation z, respectively. Therefore, accuracy and confidence are equal if the decision maker has the true model for observations. Sometimes, however, the decision maker does not even have access to Z. For example, in the motion discrimination task, if the data is binned based on difficulty (i.e., motion strength), the decision maker cannot estimate P(S|difficulty) because she does not know the difficulty of each trial. As a result, accuracy and confidence are not equal. In the general case, the decision maker can utilize multiple observations over time and perform an action on each time step. For example, in the coin toss problem, the decision maker could request a flip multiple times to gather more information. If she requests a flip two times and then guesses the state to be the coin biased toward heads, her actions would be Sample, Sample, Choose heads. She also has two observations (likely to be two heads). In the general case, the state of the environment can also change after each action.² In this case, the relationship between accuracy and confidence at time t, after a sequence (history H_t) of actions and observations h_t = a_0, z_1, a_1, ..., z_{t−1}, a_{t−1}, is
\[
P(A_t | S_t, H_t)\,P(S_t | H_t) = P(S_t | A_t, H_t)\,P(A_t | H_t). \tag{3}
\]
By the same reasoning as above, accuracy and confidence are equal if and only if the decision maker has access to all the observations and has the true model of the task.

¹In the case that there are multiple states with maximum probability, accuracy is the sum of the confidence values on those states.

3 The POMDP Model

Partially Observable Markov Decision Processes (POMDPs) provide a mathematical framework for decision making under uncertainty in autonomous agents [8].
A POMDP is formally a tuple (S, A, Z, T, O, R, γ) with the following description: S is a finite set of states of the environment, A is a finite set of possible actions, and Z is a finite set of possible observations. T is a transition function T : S × S × A → [0, 1], which determines P(s|s′, a), the probability of going from a state s′ to a state s after performing a particular action a. O is an observation function O : Z × A × S → [0, 1], which determines P(z|a, s), the probability of observing z after performing an action a and ending in a particular state s. R is the reward function R : S × A → R, determining the reward received by performing an action in a particular state. γ is the discount factor, always between 0 and 1, which determines how much rewards in the future are discounted compared to current rewards. In a POMDP, the goal is to find a sequence of actions that maximizes the expected discounted reward E[∑_{t=0}^{∞} γ^t R(s_t, a_t)]. The states are not fully observable and the agent must rely on its observations to choose actions. At time t, we have a history of actions and observations h_t = a_0, z_1, a_1, ..., z_{t−1}, a_{t−1}. The belief state [1] at time t is the posterior probability over states given this history and the prior probability b_0 over states: b_t = P(s_t | h_t, b_0). As the system is Markovian, the belief state captures the sufficient statistics for the history of states and actions [19], and it is possible to obtain b_{t+1} using only b_t, a_t, and z_{t+1}:
\[
b_{t+1}(s) \propto O(s, a_t, z_{t+1}) \sum_{s'} T(s', s, a_t)\, b_t(s'). \tag{4}
\]
Given this definition of belief, the goal of the agent is to find a sequence of actions that maximizes the expected reward ∑_{t=0}^{∞} γ^t R(b_t, a_t). Actions are picked based on the belief state, and the resulting mapping from belief states to actions is called a policy (denoted by π), which is a probability distribution over actions: π(b_t) = P(A_t | b_t).
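The belief update (4) is a one-liner once T and O are stored as tables. A minimal sketch on the coin-toss example, with the array layout and the 0.8/0.2 biases being our own illustrative choices:

```python
import numpy as np

def belief_update(b, a, z, T, O):
    """One step of Eq. (4): b'(s) proportional to O(s, a, z) * sum_s' T(s', s, a) b(s').

    b: belief over states, shape (|S|,); T[a]: (|S|, |S|) with T[a][s', s] = P(s|s', a);
    O[a]: (|S|, |Z|) with O[a][s, z] = P(z|a, s).  This layout is illustrative.
    """
    b_new = O[a][:, z] * (b @ T[a])   # predict through T, then weight by O
    return b_new / b_new.sum()        # renormalize (the proportionality in (4))

# Coin-toss example: two static states (heads-biased, tails-biased coin),
# one "Sample" action (index 0), observations heads = 0, tails = 1.
T = [np.eye(2)]                       # the hidden coin never changes
O = [np.array([[0.8, 0.2],            # heads-biased coin
               [0.2, 0.8]])]          # tails-biased coin
b = np.array([0.5, 0.5])              # uniform prior b_0
b = belief_update(b, 0, 0, T, O)      # observe heads once
```

Repeated application accumulates evidence: a second heads observation pushes the belief in the heads-biased coin from 0.8 to 16/17 ≈ 0.94.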
The policy that maximizes ∑_{t=0}^{∞} γ^t R(b_t, a_t) is called the optimal policy, π*. It can be shown that there is always a deterministic optimal policy, allowing the agent to always choose one action for each b_t [20]. As a result, we may use a function π* : B → A, where B is the space of all possible beliefs. There has been considerable progress in recent years in fast "POMDP solvers" which find near-optimal policies for POMDPs [14, 17, 11].

3.1 Modeling Decision Making with POMDPs

Results from experiments and theoretical models indicate that in many perceptual decision making tasks, if the previous task state is revealed, the history beyond this state does not exert a noticeable influence on decisions [2], suggesting that the Markov assumption and the notion of belief state are applicable to perceptual decision making. Additionally, since the POMDP model aims to maximize the expected reward, the problem of guessing the correct state in perceptual decision making can be converted to a reward maximization problem by simply setting the reward for the correct guess to 1 and the reward for all other actions to 0. The POMDP model also allows other costs in decision making to be taken into account, e.g., the cost of sampling, which the brain may incur for metabolic or other evolutionarily-driven reasons. Finally, as there is only one correct hidden state in each trial, the policy is deterministic (choosing the most probable state), consistent with the POMDP model. All these facts mean that we can model perceptual decision making with the POMDP framework. In cases where all observations and the true environment model are available to the decision maker, the belief state in the POMDP is equal to both accuracy and confidence, as discussed above.

²In traditional perceptual decision making tasks such as the random dots task, the state does not usually change. However, our model is equally applicable to this situation.
When some information is hidden from the decision maker, one can use a POMDP with that information to model accuracy and another POMDP without that information to model confidence. If this hidden information is independent of time, we can model the difference with the initial belief state b_0; i.e., we use two similar POMDPs, with different initial belief states, to model accuracy and confidence. In the well-known motion discrimination experiment, it is common to bin the data based on the difficulty of the task. This difficulty is hidden from the decision maker and also independent of time. As a result, confidence can be calculated by the same POMDP that models accuracy but with a different initial belief state. This case is discussed in the next section.

4 Experiments and Results

We investigate the applicability of the POMDP model in the context of two well-known tasks in perceptual decision making. The first is a fixed-duration motion discrimination task with a "sure option," presented in [13]. In this task, a movie of randomly moving dots is shown to a monkey for a fixed duration. After a delay period, the monkey must correctly choose the direction of motion (left or right) of the majority of the dots to obtain a reward. In half of the trials, a third choice also becomes available, the "sure option," which always leads to a reward, though a smaller one than the reward for guessing the direction correctly. Intuitively, if the monkey wants to maximize reward, it should go for the sure choice only when it is very uncertain about the direction of the dots. The second task is a reaction-time motion discrimination task in humans studied in [12]. In this task, the subject observes the random dots motion stimulus but must determine the direction of motion (in this case, up or down) of the majority of the dots as fast and as accurately as possible (rather than observing for a fixed duration).
In addition to their decision regarding direction, subjects indicated their confidence in their decision on a horizontal bar stimulus, where pointing nearer to the left end meant less confidence and nearer to the right end meant more confidence. In both tasks, the difficulty of the task is governed by a parameter known as "coherence" (or "motion strength"), defined as the percentage of dots moving in the same direction from frame to frame in a given trial. In the experiments, the coherence value for a given trial was chosen to be one of the following: 0.0%, 3.2%, 6.4%, 12.8%, 25.6%, 51.2%.

4.1 Fixed Duration Task as a POMDP

The direction and the coherence of the moving dots comprise the states of the environment. In addition, the actions available to the subject depend on the stage of the trial, namely, random dots display, wait period, choosing the direction or the sure choice, or choosing only the direction. As a result, the stage of the trial is also part of the state of the POMDP. As the transitions between these stages depend on time, we incorporate discretized time as part of the state. Considering the data, we define a new state for each constant $\Delta t$, each direction, each coherence, and each stage (when stages overlap). We use dummy states to enforce the delay period of waiting and a terminal state, which indicates termination of the trial:

S = { (direction, coherence, stage, time), waiting states, terminal }

The actions are Sample, Wait, Left, Right, and Sure. The transition function models the passage of time and stages. The observation function models evidence accumulation only in the random dots display stage and with the action Sample. The observations received in each $\Delta t$ are governed by the number of dots moving in the same direction. We model the observations as normally distributed around a mean related to the coherence and the direction as follows:
$$O((d, c, \text{display}, t), \text{Sample}) = \mathcal{N}(\mu_{d,c}, \sigma_{d,c}).$$

Figure 1: Experimental accuracy of the decision maker for each coherence is shown in (a). This plot is from [13]. The curves with empty circles and dashed lines are the trials where the sure option was not given to the subject. The curves with solid circles and solid lines are the trials where the sure option was shown but waived by the decision maker. (b) shows the accuracy curves for the POMDP model fit to the experimental accuracy data from trials where the sure option was not given.

The reward for choosing the correct direction is set to 1, and the other rewards were set relative to this reward. The sure option was set to a positive reward less than 1, while the cost of sampling and receiving a new observation was set to a negative "reward" value. To model the unavailability of some actions in some states, we set their resultant rewards to a large negative number to preclude the decision making agent from picking these actions. The discount factor models how much more immediate reward is worth relative to future rewards. In the fixed-duration task, the subject does not have the option of terminating the trial early to get reward sooner, and therefore we used a discount factor of 1 for this task.

4.2 Predicting the Confidence in the Fixed Duration Task

As mentioned before, confidence and accuracy are equal when the same amount of information is available to the experimenter and the decision maker. Therefore, they can be modeled by the same POMDP. However, the two are not equal when we look at a specific coherence (difficulty), i.e., when the data is binned based on coherence, because the coherence in each trial is not revealed to the decision maker. Figure 1a shows the accuracy vs. stimulus duration, binned based on coherence. The confidence is not equal to the accuracy in this plot. However, we can predict the decision maker's confidence from accuracy data alone.
This time, we use two POMDPs, one for the experimenter and one for the decision maker. At time $t$, $b_t$ of the experimenter's POMDP can be related to accuracy and $b_t$ of the decision maker's to confidence. These two POMDPs have the same model parameters but different initial belief states. This is because the subject knows the environment model but does not have access to the coherence in each trial. First, we find the set of parameters for the experimenter's POMDP that reproduces the same accuracy curves as in the experiment for each coherence. We only use data from the trials where the sure option is not given, i.e., the dashed curves in Figure 1a. As the data is binned based on coherence, and coherence is observable to the experimenter, the initial belief state of the experimenter's POMDP for coherence $c$ is as follows: 0.5 for each of the two possible initial states (at time = 0), and 0 for the rest. Fitting the POMDP to the accuracy data yields the mean and variance for each observation function and the cost of sampling. Figure 1b shows the accuracy curves based on the experimenter's POMDP. Now, we can apply the parameters obtained from fitting the accuracy data (the experimenter's POMDP) to the decision maker's POMDP to predict her confidence. The decision maker does not know the coherence in each single trial. Therefore, the initial belief state should be a uniform distribution over all initial states (all coherences, not only the coherence of that trial). Also, neural data from experiments and post-decision wagering experiments suggest that the decision maker does not recognize the existence of a true zero coherence state (coherence = 0%) [13]. Therefore, the initial probability of 0 coherence states is set to 0.

Figure 2: The confidence predicted by the POMDP fit to the observed accuracy data in the fixed-duration experiment is shown in (a). (b) shows accuracy and confidence in one plot, demonstrating that they are not equal for this task. Curves with solid lines show the confidence (same curves as (a)) and the ones with dashed lines show the accuracy (same as Figure 1b).

Figure 3: Experimental post-decision wagering results (plot (a)) and the wagering predicted by our model (plot (b)). Plot (a) is from [13].

Figure 2a shows the POMDP predictions regarding the subject's belief. Figure 2b confirms that the predicted confidence and accuracy are not equal. To test our prediction about the confidence of the decision maker, we use experimental data from post-decision wagering in this experiment. If the reward for the sure option is $r_{sure}$, then the decision maker chooses it if and only if $b(\text{right})\, r_{\text{right}} < r_{sure}$ and $b(\text{left})\, r_{\text{left}} < r_{sure}$, where $b(\text{direction})$ is the sum of the belief states of all states in that direction. Since $r_{sure}$ cannot be obtained from the fit to the accuracy data, we choose a value for $r_{sure}$ which makes the prediction of the confidence consistent with the wagering data shown in Figure 3a. We found that if $r_{sure}$ is approximately two-thirds the value of the reward for the correct direction choice, the POMDP model's prediction matches the experimental data (Figure 3b). A possible objection is that the free parameter $r_{sure}$ was used to fit the data. Although $r_{sure}$ is needed to fit the exact probabilities, we found that any reasonable value for $r_{sure}$ generates the same trend of wagering. In general, the effect of $r_{sure}$ is to shift the plots vertically. The most important phenomenon here is the relatively small gap between hard trials and easy trials in Figure 3b. Figure 4a shows what this wagering data would look like if the decision maker knew the coherence in each trial and confidence was equal to accuracy.
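The sure-option rule described above, $b(\text{right})\, r_{\text{right}} < r_{sure}$ and $b(\text{left})\, r_{\text{left}} < r_{sure}$, can be written as a one-line decision function. This is an illustrative sketch; the default $r_{sure} = 2/3$ reflects the value reported in the text, and the function name is ours:

```python
def prefers_sure_option(b_left, b_right, r_correct=1.0, r_sure=2.0 / 3.0):
    """Post-decision wagering rule: take the sure target only when the
    expected reward of *both* direction choices falls below r_sure."""
    return b_left * r_correct < r_sure and b_right * r_correct < r_sure
```

For example, at maximal uncertainty (belief 0.5 for each direction) the expected reward of a direction choice is 0.5 < 2/3, so the sure option is preferred; with belief 0.9 in one direction it is not.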
The difference between these two plots (Figure 3b and Figure 4a), and Figure 2b, which shows the confidence and the accuracy together, confirm the POMDP model's ability to explain the hard-easy effect [7], wherein the decision maker is underconfident on easy trials and overconfident on hard ones. Another way of testing the predictions about confidence is to verify whether the POMDP predicts the correct accuracy in the trials where the decision maker waives the sure option. Figure 4b shows that the results from the POMDP closely match the experimental data both in post-decision wagering and accuracy improvement. Our methods are presented in more detail in the supplementary document.

4.3 Reaction Time Task

The POMDP for the reaction time task is similar to that of the fixed duration task. The most important components of the state are again direction and coherence. We also need some dummy states for the waiting period between the decision command from the decision maker and reward delivery.

Figure 4: (a) shows what post-decision wagering would look like if the accuracy and the confidence were equal. (b) shows the accuracy predicted by the POMDP model in the trials where the sure option is shown but waived (solid lines), and also in the trials where it is not shown (dashed lines). For comparison, see experimental data in Figure 1a.

However, the passage of stages and time is not modeled. The absence of time in the state representation does not mean that time is not modeled in the framework. Tracking time is a very important component of any POMDP, especially when the discount factor is less than one. The actions for this task are Sample, Wait, Up, and Down (the latter two indicating the choice for the direction of motion). The transition model and the observation model are similar to those for the fixed duration task.
S = { (direction, coherence), waiting states, terminal }
$$O((d, c), \text{Sample}) = \mathcal{N}(\mu_{d,c}, \sigma_{d,c})$$
The reward for choosing the correct direction is 1, and the reward for sampling is a small negative value adjusted relative to the reward of the correct choice. As the subject controls the termination of the trial, the discount factor is less than 1. In this task, the subjects were explicitly advised to terminate the task as soon as they discovered the direction. Therefore, there is an incentive for the subject to terminate the trial sooner. While the sampling cost is constant during the experiment, the discount factor makes the decision making strategy dependent on time. A discount factor less than 1 means that as time passes, the effective value of the rewards decreases. Also, in a general reaction time task, the discount factor connects the trials to each other. While models usually assume each single trial is independent of the others, trials are actually dependent when the decision maker has control over trial termination. Specifically, the decision maker has a motivation to terminate each trial quickly to get the reward and proceed to the next one. Moreover, when one is very uncertain about the outcome of a trial, it may be prudent to terminate the trial sooner with the expectation that the next trial may be easier.

4.4 Predicting the Confidence in the Reaction Time Task

Like the fixed duration task, we want to predict the decision maker's confidence at a specific coherence. To achieve this, we use the same technique, i.e., two POMDPs with the same model but different initial belief states. The subject's control over the termination of the trial makes estimating the confidence more difficult in the reaction time task.
As the subject decides based on her own belief, not accuracy, the relationship between accuracy and reaction time, binned based on difficulty, is very noisy compared to the fixed duration task (plots of this relationship are given in the supplementary materials of [12]). Therefore, we fit the experimenter's POMDP to two other plots: reaction time vs. motion strength (coherence), and accuracy vs. motion strength (coherence). The first subject (S1) of the original experiment was picked for this analysis because the behavior of this subject was consistent with the behavior of the majority of subjects [12]. Figures 5a and 5b show the experimental data from [12]. Figures 5c and 5d show the results from the POMDP model fit to the experimental data. As in the previous task, the initial belief state of the POMDP for a coherence $c$ is 0.5 for each direction of $c$, and 0 for the rest. All the free parameters of the POMDP were extracted from this fit. Again, as in the fixed duration task, we assume that the decision maker knows the environment model but does not know the coherence of each trial or the existence of 0% coherence.

Figure 5: (a) and (b) show accuracy vs. motion strength and reaction time vs. motion strength plots from the reaction-time random dots experiments in [12]. (c) and (d) show the results from the POMDP model.

Figure 6: (a) illustrates the confidence reported by the human subject from [12]. (b) shows the confidence predicted by the POMDP model.

Figure 6a shows the reported confidence from the experiments and Figure 6b shows the prediction of our POMDP model for the belief of the decision maker. Although this report is not on a percentage scale and a quantitative comparison is not possible, the general trends in these plots are similar. The two become almost identical if one maps the report bar to the probability range.
In both tasks, we assume that the decision maker has a nearly perfect model of the environment, apart from using 5 different coherences instead of 6 (the zero coherence state assumed unknown). This assumption is not necessarily true. Although the decision maker understands that the difficulty of the trials is not constant, she might not know the exact number of coherences. For example, she may divide trials into three categories: easy, normal, and hard for each direction. However, these differences do not significantly change the belief because the observations are generated by the true model, not the decision maker's model. We tested this hypothesis in our experiments. Although using a separate decision maker's model makes the predictions closer to the real data, we used the true (experimenter's) model to avoid overfitting the data.

5 Conclusions

Our results present, to our knowledge, the first supporting evidence for the utility of a Bayesian reward optimization framework based on POMDPs for modeling confidence judgements in subjects engaged in perceptual decision making. We showed that the predictions of the POMDP model are consistent with results on decision confidence in both primate and human decision making tasks, encompassing fixed-duration and reaction-time paradigms. Unlike traditional descriptive models such as drift-diffusion or race models, the POMDP model is normative and is derived from Bayesian and reward optimization principles. Additionally, unlike the traditional models, it allows one to model optimal decision making across trials using the concept of a discount factor. Important directions for future research include leveraging the ability of the POMDP framework to model intra-trial probabilistic state transitions, and exploring predictions of the POMDP model for decision making experiments with more sophisticated reward/cost functions.

References

[1] Karl J. Åström.
Optimal control of Markov decision processes with incomplete state estimation. Journal of Mathematical Analysis and Applications, pages 174–205, 1965.
[2] Jan Drugowitsch, Rubén Moreno-Bote, Anne K. Churchland, Michael N. Shadlen, and Alexandre Pouget. The cost of accumulating evidence in perceptual decision making. The Journal of Neuroscience, 32(11):3612–3628, 2012.
[3] Jan Drugowitsch, Rubén Moreno-Bote, and Alexandre Pouget. Relation between belief and performance in perceptual decision making. PLoS ONE, 9(5):e96511, 2014.
[4] Timothy D. Hanks, Mark E. Mazurek, Roozbeh Kiani, Elisabeth Hopp, and Michael N. Shadlen. Elapsed decision time affects the weighting of prior probability in a perceptual decision task. Journal of Neuroscience, 31(17):6339–6352, 2011.
[5] Yanping Huang, Abram L. Friesen, Timothy D. Hanks, Michael N. Shadlen, and Rajesh P. N. Rao. How prior probability influences decision making: A unifying probabilistic model. In Proceedings of the Twenty-Sixth Annual Conference on Neural Information Processing Systems (NIPS), pages 1277–1285, 2012.
[6] Yanping Huang and Rajesh P. N. Rao. Reward optimization in the primate brain: A probabilistic model of decision making under uncertainty. PLoS ONE, 8(1):e53344, 2013.
[7] Peter Juslin, Henrik Olsson, and Mats Björkman. Brunswikian and Thurstonian origins of bias in probability assessment: On the interpretation of stochastic components of judgment. Journal of Behavioral Decision Making, 10(3):189–209, 1997.
[8] Leslie Pack Kaelbling, Michael L. Littman, and Anthony R. Cassandra. Planning and acting in partially observable stochastic domains. Artificial Intelligence, 101(1–2):99–134, 1998.
[9] Adam Kepecs and Zachary F. Mainen. A computational framework for the study of confidence in humans and animals. Philosophical Transactions of the Royal Society B: Biological Sciences, 367(1594):1322–1337, 2012.
[10] Adam Kepecs, Naoshige Uchida, Hatim A. Zariwala, and Zachary F. Mainen.
Neural correlates, computation and behavioural impact of decision confidence. Nature, 455(7210):227–231, 2008.
[11] Koosha Khalvati and Alan K. Mackworth. A fast pairwise heuristic for planning under uncertainty. In Proceedings of the Twenty-Seventh AAAI Conference on Artificial Intelligence, pages 187–193, 2013.
[12] Roozbeh Kiani, Leah Corthell, and Michael N. Shadlen. Choice certainty is informed by both evidence and decision time. Neuron, 84(6):1329–1342, 2014.
[13] Roozbeh Kiani and Michael N. Shadlen. Representation of confidence associated with a decision by neurons in the parietal cortex. Science, 324(5928):759–764, 2009.
[14] Hanna Kurniawati, David Hsu, and Wee Sun Lee. SARSOP: Efficient point-based POMDP planning by approximating optimally reachable belief spaces. In Proceedings of Robotics: Science and Systems IV, 2008.
[15] Navindra Persaud, Peter McLeod, and Alan Cowey. Post-decision wagering objectively measures awareness. Nature Neuroscience, 10(2):257–261, 2007.
[16] Rajesh P. N. Rao. Decision making under uncertainty: a neural model based on partially observable Markov decision processes. Frontiers in Computational Neuroscience, 4, 2010.
[17] Stéphane Ross, Joelle Pineau, Sébastien Paquet, and Brahim Chaib-draa. Online planning algorithms for POMDPs. Journal of Artificial Intelligence Research, 32(1), 2008.
[18] Michael N. Shadlen and William T. Newsome. Motion perception: seeing and deciding. Proceedings of the National Academy of Sciences of the United States of America, 93(2):628–633, 1996.
[19] Richard D. Smallwood and Edward J. Sondik. The optimal control of partially observable Markov processes over a finite horizon. Operations Research, 21(5):1071–1088, 1973.
[20] Edward J. Sondik. The optimal control of partially observable Markov processes over the infinite horizon: Discounted costs. Operations Research, 26(2):282–304, 1978.
Top-k Multiclass SVM

Maksim Lapin,¹ Matthias Hein² and Bernt Schiele¹
¹Max Planck Institute for Informatics, Saarbrücken, Germany
²Saarland University, Saarbrücken, Germany

Abstract

Class ambiguity is typical in image classification problems with a large number of classes. When classes are difficult to discriminate, it makes sense to allow k guesses and evaluate classifiers based on the top-k error instead of the standard zero-one loss. We propose top-k multiclass SVM as a direct method to optimize for top-k performance. Our generalization of the well-known multiclass SVM is based on a tight convex upper bound of the top-k error. We propose a fast optimization scheme based on an efficient projection onto the top-k simplex, which is of its own interest. Experiments on five datasets show consistent improvements in top-k accuracy compared to various baselines.

1 Introduction

Figure 1: Images from SUN 397 [29] illustrating class ambiguity. Top: (left to right) Park, River, Pond. Bottom: Park, Campus, Picnic area.

As the number of classes increases, two important issues emerge: class overlap and the multilabel nature of examples [9]. This phenomenon asks for adjustments of both the evaluation metrics and the loss functions employed. When a predictor is allowed $k$ guesses and is not penalized for $k-1$ mistakes, such an evaluation measure is known as the top-$k$ error. We argue that this is an important metric that will inevitably receive more attention in the future, as the illustration in Figure 1 indicates. How obvious is it that each row of Figure 1 shows examples of different classes? Can we imagine a human predicting correctly on the first attempt? Does it even make sense to penalize a learning system for such "mistakes"? While the problem of class ambiguity is apparent in computer vision, similar problems arise in other domains when the number of classes becomes large. We propose top-$k$ multiclass SVM as a generalization of the well-known multiclass SVM [5].
It is based on a tight convex upper bound of the top-$k$ zero-one loss which we call the top-$k$ hinge loss. While it turns out to be similar to a top-$k$ version of the ranking based loss proposed by [27], we show that the top-$k$ hinge loss is a lower bound on their version and is thus a tighter bound on the top-$k$ zero-one loss. We propose an efficient implementation based on stochastic dual coordinate ascent (SDCA) [24]. A key ingredient in the optimization is the (biased) projection onto the top-$k$ simplex. This projection turns out to be a tricky generalization of the continuous quadratic knapsack problem, respectively the projection onto the standard simplex. The proposed algorithm for solving it has complexity $O(m \log m)$ for $x \in \mathbb{R}^m$. Our implementation of the top-$k$ multiclass SVM scales to large datasets like Places 205 with about 2.5 million examples and 205 classes [30]. Finally, extensive experiments on several challenging computer vision problems show that top-$k$ multiclass SVM consistently improves in top-$k$ error over the multiclass SVM (equivalent to our top-1 multiclass SVM), one-vs-all SVM, and other methods based on different ranking losses [11, 16].

2 Top-k Loss in Multiclass Classification

In multiclass classification, one is given a set $S = \{(x_i, y_i) \mid i = 1, \ldots, n\}$ of $n$ training examples $x_i \in \mathcal{X}$ along with the corresponding labels $y_i \in \mathcal{Y}$. Let $\mathcal{X} = \mathbb{R}^d$ be the feature space and $\mathcal{Y} = \{1, \ldots, m\}$ the set of labels. The task is to learn a set of $m$ linear predictors $w_y \in \mathbb{R}^d$ such that the risk of the classifier $\arg\max_{y \in \mathcal{Y}} \langle w_y, x \rangle$ is minimized for a given loss function, which is usually chosen to be a convex upper bound of the zero-one loss. The generalization to nonlinear predictors using kernels is discussed below. The classification problem becomes extremely challenging in the presence of a large number of ambiguous classes.
It is natural in that case to extend the evaluation protocol to allow $k$ guesses, which leads to the popular top-$k$ error and top-$k$ accuracy performance measures. Formally, we consider a ranking of labels induced by the prediction scores $\langle w_y, x \rangle$. Let the bracket $[\cdot]$ denote a permutation of labels such that $[j]$ is the index of the $j$-th largest score, i.e.
$$\langle w_{[1]}, x \rangle \geq \langle w_{[2]}, x \rangle \geq \ldots \geq \langle w_{[m]}, x \rangle.$$
The top-$k$ zero-one loss $\mathrm{err}_k$ is defined as
$$\mathrm{err}_k(f(x), y) = \mathbf{1}_{\langle w_{[k]}, x \rangle > \langle w_y, x \rangle},$$
where $f(x) = (\langle w_1, x \rangle, \ldots, \langle w_m, x \rangle)^\top$ and $\mathbf{1}_P = 1$ if $P$ is true and $0$ otherwise. Note that the standard zero-one loss is recovered when $k = 1$, and $\mathrm{err}_k(f(x), y)$ is always $0$ for $k = m$. Therefore, we are interested in the regime $1 \leq k < m$.

2.1 Multiclass Support Vector Machine

In this section we review the multiclass SVM of Crammer and Singer [5], which will be extended to the top-$k$ multiclass SVM in the following. We mainly follow the notation of [24]. Given a training pair $(x_i, y_i)$, the multiclass SVM loss on example $x_i$ is defined as
$$\max_{y \in \mathcal{Y}} \{ \mathbf{1}_{y \neq y_i} + \langle w_y, x_i \rangle - \langle w_{y_i}, x_i \rangle \}. \quad (1)$$
Since our optimization scheme is based on Fenchel duality, we also require a convex conjugate of the primal loss function (1). Let $c \triangleq \mathbf{1} - e_{y_i}$, where $\mathbf{1}$ is the all ones vector and $e_j$ is the $j$-th standard basis vector in $\mathbb{R}^m$; let $a \in \mathbb{R}^m$ be defined componentwise as $a_j \triangleq \langle w_j, x_i \rangle - \langle w_{y_i}, x_i \rangle$; and let $\Delta \triangleq \{ x \in \mathbb{R}^m \mid \langle \mathbf{1}, x \rangle \leq 1,\; 0 \leq x_i,\; i = 1, \ldots, m \}$.

Proposition 1 ([24], § 5.1). A primal-conjugate pair for the multiclass SVM loss (1) is
$$\varphi(a) = \max\{0, (a + c)_{[1]}\}, \qquad \varphi^*(b) = \begin{cases} -\langle c, b \rangle & \text{if } b \in \Delta, \\ +\infty & \text{otherwise.} \end{cases} \quad (2)$$
Note that thresholding with $0$ in $\varphi(a)$ is actually redundant, as $(a + c)_{[1]} \geq (a + c)_{y_i} = 0$, and is only given to enhance similarity to the top-$k$ version defined later.

2.2 Top-k Support Vector Machine

The main motivation for the top-$k$ loss is to relax the penalty for making an error in the top-$k$ predictions.
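The top-$k$ zero-one loss above is straightforward to compute: sort the scores, take the $k$-th largest, and compare it with the score of the true class. A minimal sketch (function name is ours):

```python
def topk_error(scores, y, k):
    """Top-k zero-one loss err_k: 1 iff the k-th largest score strictly
    exceeds the true class score (true class outside the top k)."""
    kth = sorted(scores, reverse=True)[k - 1]
    return 1 if kth > scores[y] else 0
```

For instance, with scores (0.1, 0.5, 0.3) and true class 0, the top-1 error is 1 but the top-3 error is 0, matching the remark that $\mathrm{err}_k$ vanishes at $k = m$.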
Looking at $\varphi$ in (2), a direct extension to the top-$k$ setting would be the function
$$\psi_k(a) = \max\{0, (a + c)_{[k]}\},$$
which incurs a loss iff $(a + c)_{[k]} > 0$. Since the ground truth score $(a + c)_{y_i} = 0$, we conclude that
$$\psi_k(a) > 0 \iff \langle w_{[1]}, x_i \rangle \geq \ldots \geq \langle w_{[k]}, x_i \rangle > \langle w_{y_i}, x_i \rangle - 1,$$
which directly corresponds to the top-$k$ zero-one loss $\mathrm{err}_k$ with margin $1$. Note that the function $\psi_k$ ignores the values of the first $(k-1)$ scores, which could be quite large if there are highly similar classes. That would be fine in this model as long as the correct prediction is within the first $k$ guesses. However, the function $\psi_k$ is unfortunately nonconvex, since the function $f_k(x) = x_{[k]}$ returning the $k$-th largest coordinate is nonconvex for $k \geq 2$. Therefore, finding a globally optimal solution is computationally intractable. Instead, we propose the following convex upper bound on $\psi_k$, which we call the top-$k$ hinge loss,
$$\varphi_k(a) = \max\Big\{0, \frac{1}{k} \sum_{j=1}^{k} (a + c)_{[j]}\Big\}, \quad (3)$$
where the sum of the $k$ largest components is known to be convex [3]. We have that
$$\psi_k(a) \leq \varphi_k(a) \leq \varphi_1(a) = \varphi(a)$$
for any $k \geq 1$ and $a \in \mathbb{R}^m$. Moreover, $\varphi_k(a) < \varphi(a)$ unless all $k$ largest scores are the same. This extra slack can be used to increase the margin between the current and the $(m - k)$ remaining least similar classes, which should then lead to an improvement in the top-$k$ metric.

2.2.1 Top-k Simplex and Convex Conjugate of the Top-k Hinge Loss

In this section we derive the conjugate of the proposed loss (3). We begin with a well known result that is used later in the proof. All proofs can be found in the supplement. Let $[a]_+ = \max\{0, a\}$.

Lemma 1 ([17], Lemma 1). $\sum_{j=1}^{k} h_{[j]} = \min_t \big\{ kt + \sum_{j=1}^{m} [h_j - t]_+ \big\}$.

Figure 2: Top-$k$ simplex $\Delta_k(1)$ for $m = 3$ (Top-1, Top-2, Top-3). Unlike the standard simplex, it has $\binom{m}{k} + 1$ vertices.

We also define a set $\Delta_k$ which arises naturally as the effective domain¹ of the conjugate of (3).
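The top-$k$ hinge loss (3) is also a one-liner once the margin-shifted scores $a + c$ are formed: average the $k$ largest entries and threshold at zero. An illustrative sketch (not the paper's optimized implementation):

```python
def topk_hinge(scores, y, k):
    """Top-k hinge loss (3): max(0, mean of the k largest entries of a + c),
    where a_j = s_j - s_y and c = 1 - e_y."""
    ac = [s - scores[y] + (1 if j != y else 0) for j, s in enumerate(scores)]
    return max(0.0, sum(sorted(ac, reverse=True)[:k]) / k)
```

On scores (0.5, 1.0, 0.2) with true class 0, the values for $k = 1, 2, 3$ are 1.5, 1.1, and about 0.733, illustrating that the loss is non-increasing in $k$, consistent with $\varphi_k(a) \leq \varphi_1(a)$.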
By analogy, we call it the top-$k$ simplex, as for $k = 1$ it reduces to the standard simplex with the inequality constraint (i.e. $0 \in \Delta_k$). Let $[m] \triangleq \{1, \ldots, m\}$.

Definition 1. The top-$k$ simplex is a convex polytope defined as
$$\Delta_k(r) \triangleq \Big\{ x \;\Big|\; \langle \mathbf{1}, x \rangle \leq r,\; 0 \leq x_i \leq \tfrac{1}{k} \langle \mathbf{1}, x \rangle,\; i \in [m] \Big\},$$
where $r \geq 0$ is the bound on the sum $\langle \mathbf{1}, x \rangle$. We let $\Delta_k \triangleq \Delta_k(1)$.

The crucial difference to the standard simplex is the upper bound on the $x_i$'s, which limits their maximal contribution to the total sum $\langle \mathbf{1}, x \rangle$. See Figure 2 for an illustration. The first technical contribution of this work is as follows.

Proposition 2. A primal-conjugate pair for the top-$k$ hinge loss (3) is given as follows:
$$\varphi_k(a) = \max\Big\{0, \frac{1}{k} \sum_{j=1}^{k} (a + c)_{[j]}\Big\}, \qquad \varphi_k^*(b) = \begin{cases} -\langle c, b \rangle & \text{if } b \in \Delta_k, \\ +\infty & \text{otherwise.} \end{cases} \quad (4)$$
Moreover, $\varphi_k(a) = \max\{\langle a + c, \lambda \rangle \mid \lambda \in \Delta_k\}$.

Therefore, we see that the proposed formulation (3) naturally extends the multiclass SVM of Crammer and Singer [5], which is recovered when $k = 1$. We have also obtained an interesting extension (or rather contraction, since $\Delta_k \subset \Delta$) of the standard simplex.

2.3 Relation of the Top-k Hinge Loss to Ranking Based Losses

Usunier et al. [27] have recently formulated a very general family of convex losses for ranking and multiclass classification. In their framework, the hinge loss on example $x_i$ can be written as
$$L_\beta(a) = \sum_{y=1}^{m} \beta_y \max\{0, (a + c)_{[y]}\},$$
where $\beta_1 \geq \ldots \geq \beta_m \geq 0$ is a non-increasing sequence of non-negative numbers which act as weights for the ordered losses. The relation to the top-$k$ hinge loss becomes apparent if we choose $\beta_j = \frac{1}{k}$ if $j \leq k$, and $0$ otherwise. In that case, we obtain another version of the top-$k$ hinge loss
$$\tilde{\varphi}_k(a) = \frac{1}{k} \sum_{j=1}^{k} \max\{0, (a + c)_{[j]}\}. \quad (5)$$
It is straightforward to check that $\psi_k(a) \leq \varphi_k(a) \leq \tilde{\varphi}_k(a) \leq \varphi_1(a) = \tilde{\varphi}_1(a) = \varphi(a)$. The bound $\varphi_k(a) \leq \tilde{\varphi}_k(a)$ holds with equality if $(a + c)_{[1]} \leq 0$ or $(a + c)_{[k]} \geq 0$.

¹A convex function $f : X \to \mathbb{R} \cup \{\pm\infty\}$ has an effective domain $\mathrm{dom}\, f = \{x \in X \mid f(x) < +\infty\}$.
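Definition 1 amounts to two checks: a budget on the total mass and a per-coordinate cap at a $1/k$ share of that mass. A membership test makes the contraction $\Delta_k \subset \Delta$ concrete (illustrative code, with a small tolerance for floating point):

```python
def in_topk_simplex(x, k, r=1.0, tol=1e-9):
    """Membership test for Delta_k(r): sum(x) <= r and
    0 <= x_i <= sum(x) / k for every coordinate."""
    s = sum(x)
    return s <= r + tol and all(-tol <= xi <= s / k + tol for xi in x)
```

For example, the vertex $(1, 0, 0)$ of the standard simplex lies in $\Delta_1$ but violates the $\tfrac{1}{2}\langle \mathbf{1}, x\rangle$ cap of $\Delta_2$, while $(\tfrac12, \tfrac12, 0)$ lies in $\Delta_2$.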
Otherwise, there is a gap, and our top-$k$ loss is a strictly better upper bound on the actual top-$k$ zero-one loss. We perform extensive evaluation and comparison of both versions of the top-$k$ hinge loss in § 5. While [27] employed LaRank [1], and [9], [28] optimized an approximation of $L_\beta(a)$, we show in the supplement how the loss function (5) can be optimized exactly and efficiently within the Prox-SDCA framework.

Multiclass to binary reduction. It is also possible to compare directly to ranking based methods that solve a binary problem using the following reduction. We employ it in our experiments to evaluate the ranking based methods SVMPerf [11] and TopPush [16]. The trick is to augment the training set by embedding each $x_i \in \mathbb{R}^d$ into $\mathbb{R}^{md}$ using a feature map $\Phi_y$ for each $y \in \mathcal{Y}$. The mapping $\Phi_y$ places $x_i$ at the $y$-th position in $\mathbb{R}^{md}$ and puts zeros everywhere else. The example $\Phi_{y_i}(x_i)$ is labeled $+1$ and all $\Phi_y(x_i)$ for $y \neq y_i$ are labeled $-1$. Therefore, we have a new training set with $mn$ examples and $md$-dimensional (sparse) features. Moreover, $\langle w, \Phi_y(x_i) \rangle = \langle w_y, x_i \rangle$, which establishes the relation to the original multiclass problem. Another approach to general performance measures is given in [11]. It turns out that, using the above reduction, one can show that under certain constraints on the classifier, the recall@k is equivalent to the top-$k$ error. A convex upper bound on recall@k is then optimized in [11] via structured SVM. As their convex upper bound on the recall@k is not decomposable into an instance based loss, it is not directly comparable to our loss. While being theoretically very elegant, the approach of [11] does not scale to very large datasets.

3 Optimization Framework

We begin with a general $\ell_2$-regularized multiclass classification problem, where for notational convenience we keep the loss function unspecified. The multiclass SVM or the top-$k$ multiclass SVM is obtained by plugging in the corresponding loss function from § 2.
3.1 Fenchel Duality for ℓ2-Regularized Multiclass Classification Problems

Let $X \in \mathbb{R}^{d \times n}$ be the matrix of training examples $x_i \in \mathbb{R}^d$, let $W \in \mathbb{R}^{d \times m}$ be the matrix of primal variables obtained by stacking the vectors $w_y \in \mathbb{R}^d$, and $A \in \mathbb{R}^{m \times n}$ the matrix of dual variables. Before we prove our main result of this section (Theorem 1), we first impose a technical constraint on a loss function to be compatible with the choice of the ground truth coordinate. The top-$k$ hinge loss from Section 2 satisfies this requirement, as we show in Proposition 3. We also prove an auxiliary Lemma 2, which is then used in Theorem 1.

Definition 2. A convex function $\varphi$ is $j$-compatible if for any $y \in \mathbb{R}^m$ with $y_j = 0$ we have that $\sup\{\langle y, x \rangle - \varphi(x) \mid x_j = 0\} = \varphi^*(y)$.

This constraint is needed to prove equality in the following lemma.

Lemma 2. Let $\varphi$ be $j$-compatible, let $H_j = I - \mathbf{1} e_j^\top$, and let $\Phi(x) = \varphi(H_j x)$; then
$$\Phi^*(y) = \begin{cases} \varphi^*(y - y_j e_j) & \text{if } \langle \mathbf{1}, y \rangle = 0, \\ +\infty & \text{otherwise.} \end{cases}$$

We can now use Lemma 2 to compute convex conjugates of the loss functions.

Theorem 1. Let $\varphi_i$ be $y_i$-compatible for each $i \in [n]$, let $\lambda > 0$ be a regularization parameter, and let $K = X^\top X$ be the Gram matrix. The primal and Fenchel dual objective functions are given as:
$$P(W) = \frac{1}{n} \sum_{i=1}^{n} \varphi_i\big(W^\top x_i - \langle w_{y_i}, x_i \rangle \mathbf{1}\big) + \frac{\lambda}{2} \operatorname{tr}\big(W^\top W\big),$$
$$D(A) = \begin{cases} -\dfrac{1}{n} \displaystyle\sum_{i=1}^{n} \varphi_i^*\big(-\lambda n (a_i - a_{y_i,i} e_{y_i})\big) - \dfrac{\lambda}{2} \operatorname{tr}\big(A K A^\top\big) & \text{if } \langle \mathbf{1}, a_i \rangle = 0 \;\forall i, \\ +\infty & \text{otherwise.} \end{cases}$$
Moreover, we have that $W = X A^\top$ and $W^\top x_i = A K_i$, where $K_i$ is the $i$-th column of $K$.

Finally, we show that Theorem 1 applies to the loss functions that we consider.

Proposition 3. The top-$k$ hinge loss function from Section 2 is $y_i$-compatible.

We have repeated the derivation from Section 5.7 in [24], as there is a typo in their optimization problem (20) leading to the conclusion that $a_{y_i,i}$ must be $0$ at the optimum. Lemma 2 fixes this by making the requirement $a_{y_i,i} = -\sum_{j \neq y_i} a_{j,i}$ explicit. Note that this modification is already mentioned in their pseudo-code for Prox-SDCA.
3.2 Optimization of Top-k Multiclass SVM via Prox-SDCA

Algorithm 1: Top-k Multiclass SVM
 1: Input: training data {(xi, yi)}_{i=1}^n, parameters k (loss), λ (regularization), ϵ (stopping cond.)
 2: Output: W ∈ R^{d×m}, A ∈ R^{m×n}
 3: Initialize: W ← 0, A ← 0
 4: repeat
 5:   randomly permute training data
 6:   for i = 1 to n do
 7:     si ← W^⊤xi                                {prediction scores}
 8:     ai_old ← ai                               {cache previous values}
 9:     ai ← update(k, λ, ∥xi∥², yi, si, ai)      {see § 3.2.1 for details}
10:     W ← W + xi (ai − ai_old)^⊤                {rank-1 update}
11:   end for
12: until relative duality gap is below ϵ

As an optimization scheme, we employ the proximal stochastic dual coordinate ascent (Prox-SDCA) framework of Shalev-Shwartz and Zhang [24], which has strong convergence guarantees and is easy to adapt to our problem. In particular, we iteratively update a batch ai ∈ R^m of dual variables corresponding to the training pair (xi, yi) so as to maximize the dual objective D(A) from Theorem 1. We also maintain the primal variables W = XA^⊤ and stop when the relative duality gap is below ϵ. This procedure is summarized in Algorithm 1.

Let us make a few comments on the advantages of the proposed method. First, apart from the update step, which we discuss below, all main operations can be computed using a BLAS library, which makes the overall implementation efficient. Second, the update step in Line 9 is optimal in the sense that it yields the maximal dual objective increase jointly over m variables. This is opposed to SGD updates with data-independent step sizes, as well as to maximal but scalar updates in other SDCA variants. Finally, we have a well-defined stopping criterion, as we can compute the duality gap (see discussion in [2]). The latter is especially attractive if there is a time budget for learning. The algorithm can also be easily kernelized, since W^⊤xi = AKi (cf. Theorem 1).
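A structural sketch of Algorithm 1 is below. The dual update is deliberately left abstract (a placeholder step stands in for the actual top-k projection of § 3.2.1); the point is the bookkeeping: after every rank-1 update, W remains equal to XA⊤, so scores and the duality gap stay cheap to maintain.

```python
import numpy as np

def run_epochs(X, update, m=3, epochs=2, seed=0):
    """Outer loop of Algorithm 1 with an abstract dual update (a sketch, not
    the paper's solver). `update` maps (scores, a_i) to the new a_i."""
    d, n = X.shape
    W, A = np.zeros((d, m)), np.zeros((m, n))
    rng = np.random.default_rng(seed)
    for _ in range(epochs):
        for i in rng.permutation(n):             # randomly permute training data
            s = W.T @ X[:, i]                    # prediction scores
            a_old = A[:, i].copy()               # cache previous values
            A[:, i] = update(s, A[:, i])         # dual variable update (abstract)
            W += np.outer(X[:, i], A[:, i] - a_old)   # rank-1 update
    return W, A

X = np.random.default_rng(1).normal(size=(4, 6))
# placeholder update for illustration only -- NOT the top-k simplex projection:
W, A = run_epochs(X, update=lambda s, a: a - 0.1 * s)
assert np.allclose(W, X @ A.T)                   # invariant maintained
```

Swapping in the exact projection-based update of § 3.2.1 for the placeholder recovers the actual solver.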
3.2.1 Dual Variables Update

For the proposed top-k hinge loss from Section 2, optimization of the dual objective D(A) over ai ∈ R^m with the other variables fixed is an instance of a regularized (biased) projection problem onto the top-k simplex ∆_k(1/(λn)). Let a^{\j} be obtained by removing the j-th coordinate from the vector a.

Proposition 4. The following two problems are equivalent, with ai^{\yi} = −x and a_{yi,i} = ⟨1, x⟩:
  max_{ai} {D(A) | ⟨1, ai⟩ = 0} ≡ min_x {∥b − x∥² + ρ⟨1, x⟩² | x ∈ ∆_k(1/(λn))},
where b = (1/⟨xi, xi⟩)(q^{\yi} + (1 − q_{yi})1), q = W^⊤xi − ⟨xi, xi⟩ ai, and ρ = 1.

We discuss in the following section how to project onto the set ∆_k(1/(λn)) efficiently.

4 Efficient Projection onto the Top-k Simplex

One of our main technical results is an algorithm for efficiently computing projections onto ∆_k(r), respectively the biased projection introduced in Proposition 4. The optimization problem in Proposition 4 reduces to the Euclidean projection onto ∆_k(r) for ρ = 0, and for ρ > 0 it biases the solution to be orthogonal to 1. Let us highlight that ∆_k(r) is substantially different from the standard simplex, and none of the existing methods can be used, as we discuss below.

4.1 Continuous Quadratic Knapsack Problem

Finding the Euclidean projection onto the simplex is an instance of the general optimization problem min_x {∥a − x∥² | ⟨b, x⟩ ≤ r, l ≤ xi ≤ u}, known as the continuous quadratic knapsack problem (CQKP). For example, to project onto the simplex we set b = 1, l = 0 and r = u = 1. This is a well-examined problem and several highly efficient algorithms are available (see the surveys [18, 19]). The first main difference to our set is the upper bound on the xi's. All existing algorithms expect that u is fixed, which allows them to consider decompositions min_{xi} {(ai − xi)² | l ≤ xi ≤ u}, which can be solved in closed form. In our case, the upper bound (1/k)⟨1, x⟩ introduces a coupling across all variables, which makes the existing algorithms inapplicable.
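For illustration, here is a common O(d log d) sorting-based solver for the simplex special case of CQKP mentioned above (b = 1, l = 0, r = u = 1). Note this is a sketch of the standard technique, not the sorting-free method of [13] used in the paper:

```python
import numpy as np

def project_simplex(a):
    """Euclidean projection of a onto {x : x >= 0, <1, x> = 1} (sorting-based sketch)."""
    a = np.asarray(a, dtype=float)
    s = np.sort(a)[::-1]                     # sorted in decreasing order
    css = np.cumsum(s) - 1.0                 # shifted cumulative sums
    idx = np.arange(1, a.size + 1)
    rho = np.max(idx[s - css / idx > 0])     # number of active coordinates
    theta = css[rho - 1] / rho               # optimal threshold
    return np.maximum(a - theta, 0.0)

x = project_simplex([0.5, 1.2, -0.3])
assert abs(x.sum() - 1.0) < 1e-12 and (x >= 0).all()
```

The coupled upper bound (1/k)⟨1, x⟩ of ∆_k(r) is exactly what breaks this kind of per-coordinate thresholding, motivating the case analysis of § 4.2–4.3.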
A second main difference is the bias term ρ⟨1, x⟩² added to the objective. The additional difficulty introduced by this term is relatively minor. Thus we solve the problem for general ρ (including ρ = 0 for the Euclidean projection onto ∆_k(r)), even though we only need ρ = 1 in Proposition 4. The only case in which our problem reduces to CQKP is when the constraint ⟨1, x⟩ ≤ r is satisfied with equality. In that case we can set u = r/k and use any algorithm for the knapsack problem. We choose [13], since it is easy to implement, does not require sorting, and scales linearly in practice. The bias in the projection problem reduces to a constant ρr² in this case and therefore has no effect.

4.2 Projection onto the Top-k Cone

When the constraint ⟨1, x⟩ ≤ r is not satisfied with equality at the optimum, it has essentially no influence on the projection problem and can be removed. In that case we are left with the problem of the (biased) projection onto the top-k cone, which we address with the following lemma.

Lemma 3. Let x* ∈ R^d be the solution to the following optimization problem
  min_x {∥a − x∥² + ρ⟨1, x⟩² | 0 ≤ xi ≤ (1/k)⟨1, x⟩, i ∈ [d]},
and let U ≜ {i | x*_i = (1/k)⟨1, x*⟩}, M ≜ {i | 0 < x*_i < (1/k)⟨1, x*⟩}, L ≜ {i | x*_i = 0}.
1. If U = ∅ and M = ∅, then x* = 0.
2. If U ≠ ∅ and M = ∅, then U = {[1], …, [k]} and x*_i = (1/(k + ρk²)) Σ_{i=1}^k a_[i] for i ∈ U, where [i] is the index of the i-th largest component of a.
3. Otherwise (M ≠ ∅), the following system of linear equations holds:
  u = (|M| Σ_{i∈U} ai + (k − |U|) Σ_{i∈M} ai) / D,
  t′ = (|U|(1 + ρk) Σ_{i∈M} ai − (k − |U| + ρk|M|) Σ_{i∈U} ai) / D,
  D = (k − |U|)² + (|U| + ρk²)|M|,    (6)
together with the feasibility constraints on t ≜ t′ + ρuk:
  max_{i∈L} ai ≤ t ≤ min_{i∈M} ai,   max_{i∈M} ai ≤ t + u ≤ min_{i∈U} ai,    (7)
and we have x* = min{max{0, a − t}, u}.

We now show how to check if the (biased) projection is 0. For the standard simplex, where the cone is the positive orthant R^d_+, the projection is 0 when all ai ≤ 0. It is slightly more involved for ∆_k. Lemma 4.
The biased projection x* onto the top-k cone is zero if Σ_{i=1}^k a_[i] ≤ 0 (sufficient condition). If ρ = 0, this condition is also necessary.

Projection. Lemmas 3 and 4 suggest a simple algorithm for the (biased) projection onto the top-k cone. First, we check if the projection is constant (cases 1 and 2 in Lemma 3). In case 2, we compute x and check if it is compatible with the corresponding sets U, M, L. In the general case 3, we suggest a simple exhaustive search strategy. We sort a and loop over the feasible partitions U, M, L until we find a solution to (6) that satisfies (7). Since we know that 0 ≤ |U| < k and k ≤ |U| + |M| ≤ d, we can limit the search to (k − 1)(d − k + 1) iterations in the worst case, where each iteration requires a constant number of operations. For the biased projection, we leave x = 0 as the fallback case, as Lemma 4 gives only a sufficient condition. This yields a runtime complexity of O(d log(d) + kd), which is comparable to simplex projection algorithms based on sorting.

4.3 Projection onto the Top-k Simplex

As we argued in § 4.1, the (biased) projection onto the top-k simplex becomes either the knapsack problem or the (biased) projection onto the top-k cone, depending on whether the constraint ⟨1, x⟩ ≤ r is active at the optimum. The following Lemma provides a way to check which of the two cases applies.

Lemma 5. Let x* ∈ R^d be the solution to the following optimization problem
  min_x {∥a − x∥² + ρ⟨1, x⟩² | ⟨1, x⟩ ≤ r, 0 ≤ xi ≤ (1/k)⟨1, x⟩, i ∈ [d]},
let (t, u) be the optimal thresholds such that x* = min{max{0, a − t}, u}, and let U be defined as in Lemma 3. Then it must hold that λ = t + p/k − ρr ≥ 0, where p = Σ_{i∈U} ai − |U|(t + u).

Projection. We can now use Lemma 5 to compute the (biased) projection onto ∆_k(r) as follows. First, we check the special cases of zero and constant projections, as we did before. If that fails, we proceed with the knapsack problem, since it is faster to solve.
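The case analysis of Lemmas 3 and 4 can be turned into the exhaustive-search projection just described. The sketch below handles the unbiased case ρ = 0 only; the function name and the simplified case-2 feasibility test are ours (the paper's O(d log d + kd) implementation is more careful):

```python
import numpy as np

def project_topk_cone(a, k):
    """Euclidean projection onto {x : 0 <= x_i <= (1/k)<1, x>} (the rho = 0 case)."""
    a = np.asarray(a, dtype=float)
    d = a.size
    order = np.argsort(-a)
    s = a[order]                              # sorted, s[0] >= s[1] >= ...
    # Case 1 (Lemma 4): zero projection.
    if s[:k].sum() <= 0:
        return np.zeros(d)
    # Case 2 (Lemma 3): constant projection on the top-k coordinates.
    u = s[:k].sum() / k
    if k == d or s[k] <= s[k - 1] - u:        # some t has max_L a <= t <= min_U a - u
        x = np.zeros(d)
        x[order[:k]] = u
        return x
    # Case 3 (Lemma 3): exhaustive search over the sizes of U and M.
    csum = np.concatenate(([0.0], np.cumsum(s)))
    for nu in range(k):                                # |U|, with 0 <= |U| < k
        for nm in range(max(1, k - nu), d - nu + 1):   # |M|, with k <= |U|+|M| <= d
            D = (k - nu) ** 2 + nu * nm
            sU, sM = csum[nu], csum[nu + nm] - csum[nu]
            u = (nm * sU + (k - nu) * sM) / D          # system (6) with rho = 0
            t = (nu * sM - (k - nu) * sU) / D
            maxL = s[nu + nm] if nu + nm < d else -np.inf
            minU = s[nu - 1] if nu > 0 else np.inf
            if maxL <= t <= s[nu + nm - 1] and s[nu] <= t + u <= minU:  # (7)
                x = np.zeros(d)
                x[order] = np.clip(s - t, 0.0, u)
                return x
    return np.zeros(d)                        # fallback
```

For example, with k = 1 the cone is the positive orthant and the routine returns max(a, 0), while with k = d it returns the all-constant vector max(0, mean(a))·1.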
Having the thresholds (t, u) and the partitioning into the sets U, M, L, we compute the value of λ as given in Lemma 5. If λ ≥ 0, we are done. Otherwise, we know that ⟨1, x⟩ < r and go directly to the general case 3 in Lemma 3.

5 Experimental Results

We have two main goals in the experiments. First, we show that the (biased) projection onto the top-k simplex is scalable and comparable to an efficient algorithm [13] for the simplex projection (see the supplement). Second, we show that the top-k multiclass SVM using both versions of the top-k hinge loss, (3) and (5), denoted top-k SVMα and top-k SVMβ respectively, leads to improvements in top-k accuracy consistently over all datasets and choices of k. In particular, we note improvements compared to the multiclass SVM of Crammer and Singer [5], which corresponds to top-1 SVMα / top-1 SVMβ. We release our implementation of the projection procedures and both SDCA solvers as a C++ library2 with a Matlab interface.

5.1 Image Classification Experiments

We evaluate our method on five image classification datasets of different scale and complexity: Caltech 101 Silhouettes [26] (m = 101, n = 4100), MIT Indoor 67 [20] (m = 67, n = 5354), SUN 397 [29] (m = 397, n = 19850), Places 205 [30] (m = 205, n = 2448873), and ImageNet 2012 [22] (m = 1000, n = 1281167). For Caltech, d = 784, and for the others d = 4096. The results on the two large-scale datasets are in the supplement. We cross-validate hyper-parameters in the range 10⁻⁵ to 10³, extending it when the optimal value is at the boundary. We use LibLinear [7] for SVMOVA, SVMPerf [11] with the corresponding loss function for Recall@k, and the code provided by [16] for TopPush. When a ranking method like Recall@k or TopPush does not scale to a particular dataset using the reduction of the multiclass to a binary problem discussed in § 2.3, we use the one-vs-all version of the corresponding method.
We implemented Wsabie++ (denoted W++, Q/m) based on the pseudo-code from Table 3 in [9]. On Caltech 101, we use the features provided by [26]. For the other datasets, we extract CNN features from a pre-trained CNN (fc7 layer after ReLU). For the scene recognition datasets, we use the Places 205 CNN [30], and for ILSVRC 2012 we use the Caffe reference model [10].

2 https://github.com/mlapin/libsdca

State of the art (top section of Table 1).
Caltech 101 Silhouettes: Top-1 [26] 62.1 / 79.6 / 83.1; Top-2 [26] 61.4 / 79.2 / 83.4; Top-5 [26] 60.2 / 78.7 / 83.4.
MIT Indoor 67 (Top-1): BLH [4] 48.3; SP [25] 51.4; JVJ [12] 63.10; DGE [6] 66.87; ZLX [30] 68.24; GWG [8] 68.88; RAS [21] 69.0; KL [14] 70.1.
SUN 397, 10 splits (Top-1): XHE [29] 38.0; SPM [23] 47.2 ± 0.2; LSH [15] 49.48 ± 0.3; GWG [8] 51.98; ZLX [30] 54.32 ± 0.1; KL [14] 54.65 ± 0.2.

                Caltech 101 Silhouettes                   MIT Indoor 67
Method          Top-1 Top-2 Top-3 Top-4 Top-5 Top-10     Top-1 Top-2 Top-3 Top-4 Top-5 Top-10
SVMOVA          61.81 73.13 76.25 77.76 78.89 83.57      71.72 81.49 84.93 86.49 87.39 90.45
TopPush         63.11 75.16 78.46 80.19 81.97 86.95      70.52 83.13 86.94 90.00 91.64 95.90
Recall@1        61.55 73.13 77.03 79.41 80.97 85.18      71.57 83.06 87.69 90.45 92.24 96.19
Recall@5        61.60 72.87 76.51 78.76 80.54 84.74      71.49 81.49 85.45 87.24 88.21 92.01
Recall@10       61.51 72.95 76.46 78.72 80.54 84.92      71.42 81.49 85.52 87.24 88.28 92.16
W++, 0/256      62.68 76.33 79.41 81.71 83.18 88.95      70.07 84.10 89.48 92.46 94.48 97.91
W++, 1/256      59.25 65.63 69.22 71.09 72.95 79.71      68.13 81.49 86.64 89.63 91.42 95.45
W++, 2/256      55.09 61.81 66.02 68.88 70.61 76.59      64.63 78.43 84.18 88.13 89.93 94.55
top-1 SVMα      62.81 74.60 77.76 80.02 81.97 86.91      73.96 85.22 89.25 91.94 93.43 96.94
top-10 SVMα     62.98 77.33 80.49 82.66 84.57 89.55      70.00 85.45 90.00 93.13 94.63 97.76
top-20 SVMα     59.21 75.64 80.88 83.49 85.39 90.33      65.90 84.10 89.93 92.69 94.25 97.54
top-1 SVMβ      62.81 74.60 77.76 80.02 81.97 86.91      73.96 85.22 89.25 91.94 93.43 96.94
top-10 SVMβ     64.02 77.11 80.49 83.01 84.87 89.42      71.87 85.30 90.45 93.36 94.40 97.76
top-20 SVMβ     63.37 77.24 81.06 83.31 85.18 90.03      71.94 85.30 90.07 92.46 94.33 97.39

SUN 397 (10 splits)
Method          Top-1        Top-2        Top-3        Top-4        Top-5        Top-10
SVMOVA          55.23 ± 0.6  66.23 ± 0.6  70.81 ± 0.4  73.30 ± 0.2  74.93 ± 0.2  79.00 ± 0.3
TopPushOVA      53.53 ± 0.3  65.39 ± 0.3  71.46 ± 0.2  75.25 ± 0.1  77.95 ± 0.2  85.15 ± 0.3
Recall@1OVA     52.95 ± 0.2  65.49 ± 0.2  71.86 ± 0.2  75.88 ± 0.2  78.72 ± 0.2  86.03 ± 0.2
Recall@5OVA     50.72 ± 0.2  64.74 ± 0.3  70.75 ± 0.3  74.02 ± 0.3  76.06 ± 0.3  80.66 ± 0.2
Recall@10OVA    50.92 ± 0.2  64.94 ± 0.2  70.95 ± 0.2  74.14 ± 0.2  76.21 ± 0.2  80.68 ± 0.2
top-1 SVMα      58.16 ± 0.2  71.66 ± 0.2  78.22 ± 0.1  82.29 ± 0.2  84.98 ± 0.2  91.48 ± 0.2
top-10 SVMα     58.00 ± 0.2  73.65 ± 0.1  80.80 ± 0.1  84.81 ± 0.2  87.45 ± 0.2  93.40 ± 0.2
top-20 SVMα     55.98 ± 0.3  72.51 ± 0.2  80.22 ± 0.2  84.54 ± 0.2  87.37 ± 0.2  93.62 ± 0.2
top-1 SVMβ      58.16 ± 0.2  71.66 ± 0.2  78.22 ± 0.1  82.29 ± 0.2  84.98 ± 0.2  91.48 ± 0.2
top-10 SVMβ     59.32 ± 0.1  74.13 ± 0.2  80.91 ± 0.2  84.92 ± 0.2  87.49 ± 0.2  93.36 ± 0.2
top-20 SVMβ     58.65 ± 0.2  73.96 ± 0.2  80.95 ± 0.2  85.05 ± 0.2  87.70 ± 0.2  93.64 ± 0.2

Table 1: Top-k accuracy (%). Top section: state of the art. Middle section: baseline methods. Bottom section: top-k SVMs: top-k SVMα – with the loss (3); top-k SVMβ – with the loss (5).

Experimental results are given in Table 1. First, we note that our method scales to large datasets with millions of training examples, such as Places 205 and ILSVRC 2012 (results in the supplement). Second, we observe that optimizing the top-k hinge loss (in both versions) yields consistently better top-k performance. This may come at the cost of a decreased top-1 accuracy (e.g. on MIT Indoor 67) but, interestingly, may also result in a noticeable increase in the top-1 accuracy on larger datasets like Caltech 101 Silhouettes and SUN 397. This resonates with our argument that optimizing for top-k is often more appropriate for datasets with a large number of classes.
Overall, we get a systematic increase in top-k accuracy over all datasets that we examined. For example, we get the following improvements in top-5 accuracy with our top-10 SVMα compared to top-1 SVMα: +2.6% on Caltech 101, +1.2% on MIT Indoor 67, and +2.5% on SUN 397.

6 Conclusion

We demonstrated the scalability and effectiveness of the proposed top-k multiclass SVM on five image recognition datasets, leading to consistent improvements in top-k performance. In the future, one could study whether the top-k hinge loss (3) can be generalized to the family of ranking losses [27]. Similar to the top-k loss, this could lead to tighter convex upper bounds on the corresponding discrete losses.

References
[1] A. Bordes, L. Bottou, P. Gallinari, and J. Weston. Solving multiclass support vector machines with LaRank. In ICML, pages 89–96, 2007.
[2] O. Bousquet and L. Bottou. The tradeoffs of large scale learning. In NIPS, pages 161–168, 2008.
[3] S. Boyd and L. Vandenberghe. Convex Optimization. Cambridge University Press, 2004.
[4] S. Bu, Z. Liu, J. Han, and J. Wu. Superpixel segmentation based structural scene recognition. In MM, pages 681–684. ACM, 2013.
[5] K. Crammer and Y. Singer. On the algorithmic implementation of multiclass kernel-based vector machines. The Journal of Machine Learning Research, 2:265–292, 2001.
[6] C. Doersch, A. Gupta, and A. A. Efros. Mid-level visual element discovery as discriminative mode seeking. In NIPS, pages 494–502, 2013.
[7] R.-E. Fan, K.-W. Chang, C.-J. Hsieh, X.-R. Wang, and C.-J. Lin. LIBLINEAR: A library for large linear classification. Journal of Machine Learning Research, 9:1871–1874, 2008.
[8] Y. Gong, L. Wang, R. Guo, and S. Lazebnik. Multi-scale orderless pooling of deep convolutional activation features. In ECCV, 2014.
[9] M. R. Gupta, S. Bengio, and J. Weston. Training highly multiclass classifiers. JMLR, 15:1461–1492, 2014.
[10] Y. Jia, E. Shelhamer, J. Donahue, S. Karayev, J. Long, R. Girshick, S. Guadarrama, and T. Darrell.
Caffe: Convolutional architecture for fast feature embedding. arXiv preprint arXiv:1408.5093, 2014.
[11] T. Joachims. A support vector method for multivariate performance measures. In ICML, pages 377–384, 2005.
[12] M. Juneja, A. Vedaldi, C. Jawahar, and A. Zisserman. Blocks that shout: distinctive parts for scene classification. In CVPR, 2013.
[13] K. Kiwiel. Variable fixing algorithms for the continuous quadratic knapsack problem. Journal of Optimization Theory and Applications, 136(3):445–458, 2008.
[14] M. Koskela and J. Laaksonen. Convolutional network features for scene recognition. In Proceedings of the ACM International Conference on Multimedia, pages 1169–1172. ACM, 2014.
[15] M. Lapin, B. Schiele, and M. Hein. Scalable multitask representation learning for scene classification. In CVPR, 2014.
[16] N. Li, R. Jin, and Z.-H. Zhou. Top rank optimization in linear time. In NIPS, pages 1502–1510, 2014.
[17] W. Ogryczak and A. Tamir. Minimizing the sum of the k largest functions in linear time. Information Processing Letters, 85(3):117–122, 2003.
[18] M. Patriksson. A survey on the continuous nonlinear resource allocation problem. European Journal of Operational Research, 185(1):1–46, 2008.
[19] M. Patriksson and C. Strömberg. Algorithms for the continuous nonlinear resource allocation problem – new implementations and numerical studies. European Journal of Operational Research, 243(3):703–722, 2015.
[20] A. Quattoni and A. Torralba. Recognizing indoor scenes. In CVPR, 2009.
[21] A. S. Razavian, H. Azizpour, J. Sullivan, and S. Carlsson. CNN features off-the-shelf: an astounding baseline for recognition. arXiv preprint arXiv:1403.6382, 2014.
[22] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, A. C. Berg, and L. Fei-Fei. ImageNet Large Scale Visual Recognition Challenge, 2014.
[23] J. Sánchez, F. Perronnin, T. Mensink, and J. Verbeek. Image classification with the Fisher vector: theory and practice.
IJCV, pages 1–24, 2013.
[24] S. Shalev-Shwartz and T. Zhang. Accelerated proximal stochastic dual coordinate ascent for regularized loss minimization. Mathematical Programming, pages 1–41, 2014.
[25] J. Sun and J. Ponce. Learning discriminative part detectors for image classification and cosegmentation. In ICCV, pages 3400–3407, 2013.
[26] K. Swersky, B. J. Frey, D. Tarlow, R. S. Zemel, and R. P. Adams. Probabilistic n-choose-k models for classification and ranking. In NIPS, pages 3050–3058, 2012.
[27] N. Usunier, D. Buffoni, and P. Gallinari. Ranking with ordered weighted pairwise classification. In ICML, pages 1057–1064, 2009.
[28] J. Weston, S. Bengio, and N. Usunier. Wsabie: scaling up to large vocabulary image annotation. IJCAI, pages 2764–2770, 2011.
[29] J. Xiao, J. Hays, K. A. Ehinger, A. Oliva, and A. Torralba. SUN database: Large-scale scene recognition from abbey to zoo. In CVPR, 2010.
[30] B. Zhou, A. Lapedriza, J. Xiao, A. Torralba, and A. Oliva. Learning deep features for scene recognition using places database. In NIPS, 2014.
Sample Complexity of Episodic Fixed-Horizon Reinforcement Learning

Christoph Dann (Machine Learning Department, Carnegie Mellon University, cdann@cdann.net)
Emma Brunskill (Computer Science Department, Carnegie Mellon University, ebrun@cs.cmu.edu)

Abstract

Recently, there has been significant progress in understanding reinforcement learning in discounted infinite-horizon Markov decision processes (MDPs) by deriving tight sample complexity bounds. However, in many real-world applications, an interactive learning agent operates for a fixed or bounded period of time, for example tutoring students for exams or handling customer service requests. Such scenarios can often be better treated as episodic fixed-horizon MDPs, for which only looser bounds on the sample complexity exist. A natural notion of sample complexity in this setting is the number of episodes required to guarantee a certain performance with high probability (PAC guarantee). In this paper, we derive an upper PAC bound Õ(|S|²|A|H²/ϵ² · ln(1/δ)) and a lower PAC bound Ω̃(|S||A|H²/ϵ² · ln(1/(δ + c))) that match up to log-terms and an additional linear dependency on the number of states |S|. The lower bound is the first of its kind for this setting. Our upper bound leverages Bernstein's inequality to improve on previous bounds for episodic finite-horizon MDPs, which have a time-horizon dependency of at least H³.

1 Introduction and Motivation

Consider test preparation software that tutors students for a national advanced placement exam taken at the end of a year, or maximizing business revenue by the end of each quarter. Each individual task instance requires making a sequence of decisions for a fixed number of steps H (e.g., tutoring one student to take an exam in spring 2015, or maximizing revenue for the end of the second quarter of 2014).
Therefore, they can be viewed as finite-horizon problems of sequential decision making under uncertainty, in contrast to an infinite-horizon setting in which the number of time steps is infinite. When the domain parameters (e.g. Markov decision process parameters) are not known in advance, and there is the opportunity to repeat the task many times (teaching a new student for each year's exam, maximizing revenue for each new quarter), this can be treated as episodic fixed-horizon reinforcement learning (RL). One important question is to understand how much experience is required to act well in this setting. We formalize this as the sample complexity of reinforcement learning [1], which is the number of time steps on which the algorithm may select an action whose value is not near-optimal. RL algorithms with a sample complexity that is a polynomial function of the domain parameters are referred to as Probably Approximately Correct (PAC) [2, 3, 4, 1]. Though there has been significant work on PAC RL algorithms for the infinite-horizon setting, there has been relatively little work on the finite-horizon scenario. In this paper we present the first, to our knowledge, lower bound, and a new upper bound, on the sample complexity of episodic finite-horizon PAC reinforcement learning in discrete state-action spaces. Our bounds are tight up to log-factors in the time horizon H, the accuracy ϵ, and the number of actions |A|, and up to an additive constant in the failure probability δ. These bounds improve upon existing results by a factor of at least H. Our results also apply when the reward model is a function of the within-episode time step in addition to the state and action. While we assume a stationary transition model, our results can be extended readily to time-dependent state transitions.
Our proposed UCFH (Upper-Confidence Fixed-Horizon RL) algorithm, which achieves our upper PAC guarantee, can be applied directly to a wide range of fixed-horizon episodic MDPs with known rewards.1 It does not require additional structure such as assuming access to a generative model [8] or that the state transitions are sparse or acyclic [6]. The limited prior research on upper-bound PAC results for finite-horizon MDPs has focused on different settings, such as partitioning a longer trajectory into fixed-length segments [4, 1], or considering a sliding time window [9]. The tightest dependence on the horizon in terms of the number of episodes presented in these approaches is at least H³, whereas our dependence is only H². More importantly, such alternative settings require the optimal policy to be stationary, whereas in general in finite-horizon settings the optimal policy is non-stationary (e.g. it is a function of both the state and the within-episode time step).2 Fiechter [10, 11] and Reveliotis and Bountourelis [12] do tackle a closely related setting, but find a dependence that is at least H⁴. Our work builds on recent work [6, 8] on PAC infinite-horizon discounted RL that offers much tighter upper and lower sample complexity bounds than were previously known. To use an infinite-horizon algorithm in a finite-horizon setting, a simple change is to augment the state space by the time step (ranging over 1, …, H), which enables the learned policy to be non-stationary in the original state space (or, equivalently, stationary in the newly augmented space). Unfortunately, since these recent bounds are in general a quadratic function of the state space size, the proposed state space expansion would introduce at least an additional H² factor in the sample complexity term, yielding at least an H⁴ dependence in the number of episodes for the sample complexity.
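The augmentation argument just described is easy to make concrete: pairing each state with the within-episode time step makes a stationary policy on the augmented space act non-stationarily on the original one. A toy illustration (names and the example policy are ours):

```python
# Augmenting the state space with the time step t in {1, ..., H}:
# a stationary policy on pairs (s, t) is a time-dependent policy on s alone.

def augment(num_states, H):
    """Augmented state space: all pairs (s, t)."""
    return [(s, t) for t in range(1, H + 1) for s in range(num_states)]

S, H = 4, 3
aug = augment(S, H)
assert len(aug) == S * H                 # state space grows by a factor of H

# a stationary policy on (s, t) induces time-dependent decision rules on s:
pi_aug = {(s, t): (s + t) % 2 for (s, t) in aug}   # arbitrary illustrative policy
pi_t = [{s: pi_aug[(s, t)] for s in range(S)} for t in range(1, H + 1)]
assert pi_t[0][1] != pi_t[1][1]          # same state, different action at different t
```

The quadratic state-space dependence of the infinite-horizon bounds is exactly why this factor-of-H blow-up in |S| translates into an extra H² in the sample complexity.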
Somewhat surprisingly, we prove an upper bound on the sample complexity for the finite-horizon case that scales only quadratically with the horizon. A key part of our proof is that the variance of the value function in the finite-horizon setting satisfies a Bellman equation. We also leverage recent insights that state–action pairs can be estimated to different precisions depending on how frequently they are visited under a policy, extending these ideas to also handle the case where the policy followed is non-stationary. Our lower-bound analysis is quite different from some prior infinite-horizon results, and involves a construction of parallel multi-armed bandits where it is required that the best arm in a certain portion of the bandits be identified with high probability to achieve near-optimality.

2 Problem Setting and Notation

We consider episodic fixed-horizon MDPs, which can be formalized as a tuple M = (S, A, r, p, p₀, H). Both the state space S and the action space A are finite sets. The learning agent interacts with the MDP in episodes of H time steps. At times t = 1, …, H, the agent observes a state s_t and chooses an action a_t based on a policy π that potentially depends on the within-episode time step, i.e., a_t = π_t(s_t) for t = 1, …, H. The next state is sampled from the stationary transition kernel s_{t+1} ∼ p(·|s_t, a_t) and the initial state from s₁ ∼ p₀. In addition, the agent receives a reward drawn from a distribution3 with mean r_t(s_t) determined by the reward function. The reward function r is possibly time-dependent and takes values in [0, 1]. The quality of a policy π is evaluated by the total expected reward of an episode, R^π_M = E[Σ_{t=1}^H r_t(s_t)]. For simplicity,1 we assume that the reward function r is known to the agent but the transition kernel p is unknown.
The question we study is for how many episodes a learning agent follows a policy π that is not ϵ-optimal, i.e., R*_M − ϵ > R^π_M, with probability at least 1 − δ, for any chosen accuracy ϵ and failure probability δ.

Notation. In the following sections, we reason about the true MDP M, an empirical MDP M̂ and an optimistic MDP M̃, which are identical except for their transition probabilities p, p̂ and p̃_t. We will provide more details about these MDPs later. We introduce the notation explicitly only for M, but the quantities carry over to M̃ and M̂ with additional tildes or hats by replacing p with p̃_t or p̂.

1 Previous works [5] have shown that the complexity of learning state transitions usually dominates that of learning reward functions. We therefore follow existing sample complexity analyses [6, 7] and assume known rewards for simplicity. The algorithm and PAC bound can be extended readily to the case of unknown reward functions.
2 The best action will generally depend on the state and the number of remaining time steps. In the tutoring example, even if the student has the same state of knowledge, the optimal tutor decision may be to space practice if there are many days until the test and to provide intensive short-term practice if the test is tomorrow.
3 It is straightforward to have the reward depend on the state, or state/action, or state/action/next state.

The (linear) operator (P^π_i f)(s) := E[f(s_{i+1}) | s_i = s] = Σ_{s′∈S} p(s′|s, π_i(s)) f(s′) takes any function f : S → R and returns the expected value of f with respect to the next time step.4 For convenience, we define the multi-step version as P^π_{i:j} f := P^π_i P^π_{i+1} ⋯ P^π_j f. The value function from time i to time j is defined as V^π_{i:j}(s) := E[Σ_{t=i}^j r_t(s_t) | s_i = s] = Σ_{t=i}^j (P^π_{i:t−1} r_t)(s) = r_i(s) + (P^π_i V^π_{i+1:j})(s), and V*_{i:j} is the optimal value function. When the policy is clear, we omit the superscript π. We denote by S(s, a) ⊆ S the set of possible successor states of state s and action a.
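The recursion V^π_{i:j}(s) = r_i(s) + (P^π_i V^π_{i+1:j})(s) just stated is exactly backward induction. A toy NumPy illustration (the random MDP and policy here are ours, purely for demonstration):

```python
import numpy as np

# Backward induction computing V^pi_{1:H} from the Bellman recursion
# V_{t:H} = r_t + P^pi_t V_{t+1:H}; the random MDP below is illustrative.
rng = np.random.default_rng(0)
S, H = 3, 4
P = rng.dirichlet(np.ones(S), size=S)   # P[s, s'] under a fixed policy pi
r = rng.uniform(size=(H, S))            # time-dependent mean rewards r_t(s) in [0, 1]

V = np.zeros(S)                         # V_{H+1:H} = 0
for t in reversed(range(H)):
    V = r[t] + P @ V                    # one Bellman backup per time step

# with rewards in [0, 1], the total expected reward lies in [0, H]
assert np.all(0 <= V) and np.all(V <= H)
```

With H backups of cost O(|S|²) each, evaluating a fixed policy costs O(H|S|²); the optimistic planning used later replaces P by a per-step maximization over a confidence set.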
The maximum number of such successor states is denoted by C = max_{(s,a)∈S×A} |S(s, a)|. In general, without making further assumptions, we have C = |S|, though in many practical domains (robotics, user modeling) each state can only transition to a subset of the full set of states (e.g. a robot cannot teleport across the building, but can only make local moves). The notation Õ is similar to the usual O-notation but ignores log-terms. More precisely, f = Õ(g) if there are constants c₁, c₂ such that f ≤ c₁ g (ln g)^{c₂}, and analogously for Ω̃. The natural logarithm is ln, and log = log₂ is the base-2 logarithm.

3 Upper PAC-Bound

We now introduce a new model-based algorithm, UCFH, for RL in finite-horizon episodic domains. We will later prove that UCFH is PAC, with an upper bound on its sample complexity that is smaller than in prior approaches. Like many other PAC RL algorithms [3, 13, 14, 15], UCFH uses an optimism-under-uncertainty approach to balance exploration and exploitation. The algorithm generally works in phases comprised of optimistic planning, policy execution and model updating, which take several episodes each. Phases are indexed by k. As the agent acts in the environment and observes (s, a, r, s′) tuples, UCFH maintains a confidence set over the possible transition parameters for each state-action pair that are consistent with the observed transitions. Defining such a confidence set that holds with high probability can be achieved using concentration inequalities like the Hoeffding inequality. One innovation in our work is to use a particular new set of conditions to define the confidence set, which enables us to obtain our tighter bounds. We will discuss the confidence sets further below. The collection of these confidence sets together forms a class of MDPs M_k that are consistent with the observed data. We define M̂_k as the maximum-likelihood estimate of the MDP given the previous observations. Given M_k, UCFH computes a policy π_k by performing optimistic planning.
Specifically, we use a finite-horizon variant of extended value iteration (EVI) [5, 14]. EVI performs modified Bellman backups that are optimistic with respect to a given set of parameters. That is, given a confidence set of possible transition model parameters, it selects in each time step the model within that set that maximizes the expected sum of future rewards. Appendix A provides more details about fixed-horizon EVI. UCFH then executes π_k until there is a state-action pair (s, a) that has been visited often enough since its last update (defined precisely in the until-condition in UCFH). After updating the model statistics for this (s, a)-pair, a new policy π_{k+1} is obtained by optimistic planning again. We refer to each such iteration of planning–execution–update as a phase with index k. If there is no ambiguity, we omit the phase indices k to avoid cluttered notation. UCFH is inspired by the infinite-horizon UCRL-γ algorithm of Lattimore and Hutter [6] but has several important differences. First, the policy can only be updated at the end of an episode, so there is no need for explicit delay phases as in UCRL-γ. Second, the policies π_k in UCFH are time-dependent. Finally, UCFH can directly deal with non-sparse transition probabilities, whereas UCRL-γ only directly allows two possible successor states for each (s, a)-pair (C = 2).

Confidence sets. The class of MDPs M_k consists of fixed-horizon MDPs M′ with the known true reward function r and where the transition probability p′_t(s′|s, a) from any (s, a) ∈ S × A to s′ ∈ S(s, a) at any time t is in the confidence set induced by p̂(s′|s, a) of the empirical MDP M̂.
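The two concentration conditions defining UCFH's ConfidenceSet (Eqs. (1) and (2) in Algorithm 1 below) can be transcribed directly as a membership test. Note the exact radii — in particular the √(2 ln(6/δ₁)/(n−1)) in condition (1) — are our reading of the pseudocode and should be treated as assumptions:

```python
import math

def in_confidence_set(p_hat, p, n, delta1):
    """Could p be the true transition probability, given n samples with mean p_hat?
    A sketch transcribing conditions (1)-(2); constants are our reading of them."""
    L = math.log(6.0 / delta1)
    if n > 1:  # condition (1): the variances of the two Bernoullis must be close
        if abs(p * (1 - p) - p_hat * (1 - p_hat)) > math.sqrt(2 * L / (n - 1)):
            return False
    # condition (2): min of a Hoeffding-style and a Bernstein-style deviation bound
    hoeffding = math.sqrt(L / (2 * n))
    bernstein = math.sqrt(2 * p_hat * (1 - p_hat) * L / n) + 2 * L / (3 * n)
    return abs(p_hat - p) <= min(hoeffding, bernstein)

assert in_confidence_set(0.5, 0.5, 10, 0.05)          # the estimate itself is plausible
assert not in_confidence_set(0.0, 0.9, 1000, 0.05)    # a far-off p is rejected
```

The Bernstein term shrinks like √(p̂(1−p̂)/n), so rarely-taken transitions (small p̂) get much tighter intervals than the Hoeffding radius alone would give — this is one source of the improved horizon dependence.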
Solely for the purpose of computationally more efficient optimistic planning, we allow time-dependent transitions (allows choosing different transition models in different time steps to maximize reward), but this does not affect the theoretical guarantees as the true stationary MDP is still in Mk with high 4The definition also works for time-dependent transition probabilities. 3 Algorithm 1: UCFH: Upper-Confidence Fixed-Horizon episodic reinforcement learning algorithm Input: desired accuracy ϵ ∈(0, 1], failure tolerance δ ∈(0, 1], fixed-horizon MDP M Result: with probability at least 1 −δ: ϵ-optimal policy k := 1, wmin := ϵ 4H|S|, δ1 := δ 2UmaxC , Umax := |S × A| log2 |S|H wmin ; m := 512(log2 log2 H)2 CH2 ϵ2 log2 8H2|S|2 ϵ ln 6|S×A|C log2 2(4|S|2H2/ϵ) δ ; n(s, a) = v(s, a) = n(s, a, s′) := 0 ∀, s ∈S, a ∈A, s′ ∈S(s, a); while do /* Optimistic planning */ ˆp(s′|s, a) := n(s, a, s′)/n(s, a), for all (s, a) with n(s, a) > 0 and s′ ∈S(s, a); Mk := ˜ M ∈Mnonst. : ∀(s, a) ∈S × A, t = 1 . . . H, s′ ∈S(s, a) ˜pt(s′|s, a) ∈ConfidenceSet(ˆp(s′|s, a), n(s, a)) ; ˜ Mk, πk := FixedHorizonEVI(Mk); /* Execute policy */ repeat SampleEpisode(πk) ; // from M using πk until there is a (s, a) ∈S × A with v(s, a) ≥max{mwmin, n(s, a)} and n(s, a) < |S|mH; /* Update model statistics for one (s, a)-pair with condition above */ n(s, a) := n(s, a) + v(s, a); n(s, a, s′) := n(s, a, s′) + v(s, a, s′) ∀s′ ∈S(s, a); v(s, a) := v(s, a, s′) := 0 ∀s′ ∈S(s, a); k := k + 1 Procedure SampleEpisode(π) s0 ∼p0; for t = 0 to H −1 do at := πt+1(st) and st+1 ∼p(·|st, at); v(st, at) := v(st, at) + 1 and v(st, at, st+1) := v(st, at, st+1) + 1; Function ConfidenceSet(p, n) P := p′ ∈[0, 1] :if n > 1 : |p′(1 −p′) −p(1 −p)| ≤2 ln(6/δ1) n −1 , (1) |p −p′| ≤min r ln(6/δ1) 2n , r 2p(1 −p) n ln(6/δ1) + 2 3n ln 6 δ1 ! (2) return P probability. Unlike the confidence intervals used by Lattimore and Hutter [6], we not only include conditions based on Hoeffding’s inequality5 and Bernstein’s inequality (Eq. 
2), but also require that the variance p(1 − p) of the Bernoulli random variable associated with this transition is close to the empirical one (Eq. 1). This additional condition (Eq. 1) is key for making the algorithm directly applicable to generic MDPs (in which states can transition to any number of next states, e.g. C > 2) while only having a linear dependency on C in the PAC bound.

3.1 PAC Analysis

For simplicity we assume that each episode starts in a fixed start state s0. This assumption is not crucial and can easily be removed by additional notational effort.

Theorem 1. For any 0 < ϵ, δ ≤ 1, the following holds. With probability at least 1 − δ, UCFH produces a sequence of policies πk that yield at most Õ((H²C|S × A|/ϵ²) ln(1/δ)) episodes with R* − Rπk = V*_{1:H}(s0) − V^{πk}_{1:H}(s0) > ϵ. The maximum number of possible successor states is denoted by 1 < C ≤ |S|.

⁵The first condition in the min in Equation (2) is actually not necessary for the theoretical results to hold. It can be removed and all 6/δ1 can be replaced by 4/δ1.

Similarities to other analyses. The proof of Theorem 1 is quite long and involved, but builds on similar techniques for sample-complexity bounds in reinforcement learning (see e.g. Brafman and Tennenholtz [3], Strehl and Littman [16]). The general proof strategy is closest to the one of UCRL-γ [6] and the obtained bounds are similar if we replace the time horizon H with the equivalent in the discounted case, 1/(1 − γ). However, there are important differences that we highlight now briefly.
• A central quantity in the analysis by Lattimore and Hutter [6] is the local variance of the value function. The exact definition for the fixed-horizon case will be given below. The key insight for the almost tight bounds of Lattimore and Hutter [6] and Azar et al. [8] is to leverage the fact that these local variances satisfy a Bellman equation [17] and so the discounted sum of local variances can be bounded by O((1 − γ)⁻²) instead of O((1 − γ)⁻³).
We prove in Lemma 4 that local value function variances σ²_{i:j} also satisfy a Bellman equation for fixed-horizon MDPs, even if transition probabilities and rewards are time-dependent. This allows us to bound the total sum of local variances by O(H²) and obtain similarly strong results in this setting.
• Lattimore and Hutter [6] assumed there are only two possible successor states (i.e., C = 2), which allows them to easily relate the local variances σ²_{i:j} to the difference of the expected value of successor states in the true and optimistic MDP, (Pi − P̃i)Ṽ_{i+1:j}. For C > 2, the relation is less clear, but we address this by proving a bound with tight dependencies on C (Lemma C.6).
• To avoid a super-linear dependency on C in the final PAC bound, we add the additional condition in Equation (1) to the confidence set. We show that this allows us to upper-bound the total reward difference R* − Rπk of policy πk with terms that either depend on σ²_{i:j} or decrease linearly in the number of samples. This gives the desired linear dependency on C in the final bound. We therefore avoid assuming C = 2, which makes UCFH directly applicable to generic MDPs with C > 2 without the impractical transformation argument used by Lattimore and Hutter [6].

We will now introduce the notion of knownness and importance of state-action pairs that is essential for the analysis of UCFH and subsequently present several lemmas necessary for the proof of Theorem 1. We only sketch proofs here; detailed proofs for all results are available in the appendix.

Fine-grained categorization of (s, a)-pairs. Many PAC RL sample complexity proofs [3, 4, 13, 14] only have a binary notion of "knownness", distinguishing between known (transition probability estimated sufficiently accurately) and unknown (s, a)-pairs. However, as recently shown by Lattimore and Hutter [6] for the infinite-horizon setting, it is possible to obtain much tighter sample complexity results by using a more fine-grained categorization.
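The two conditions of the ConfidenceSet function can be read as a membership test for a candidate transition probability. The sketch below assumes the reconstructed placement of the square roots in Eqs. (1)-(2) (Hoeffding, empirical Bernstein, and an empirical-variance condition), since the extracted formulas are garbled; the function name and exact constants should be treated as illustrative.

```python
import math

def in_confidence_set(p_emp, p_cand, n, delta1):
    """Test whether candidate probability p_cand lies in the confidence
    set around the empirical estimate p_emp built from n samples.
    Radical placement in the two conditions is a reconstruction."""
    if not (0.0 <= p_cand <= 1.0):
        return False
    if n == 0:
        return True  # no data yet: every probability is plausible
    L = math.log(6.0 / delta1)
    # Eq. (1): empirical-variance closeness condition (only for n > 1)
    if n > 1 and abs(p_cand * (1 - p_cand) - p_emp * (1 - p_emp)) > math.sqrt(2 * L / (n - 1)):
        return False
    # Eq. (2): min of a Hoeffding bound and an empirical Bernstein bound
    bound = min(math.sqrt(L / (2 * n)),
                math.sqrt(2 * p_emp * (1 - p_emp) * L / n) + 2 * L / (3 * n))
    return abs(p_emp - p_cand) <= bound
```

The Bernstein branch of the `min` shrinks with the empirical variance, which is what lets the analysis exploit small local variances.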
In particular, a key idea is that in order to obtain accurate estimates of the value function of a policy from a starting state, it is sufficient to have only a loose estimate of the parameters of (s, a)-pairs that are unlikely to be visited under this policy. Let the weight of a (s, a)-pair given policy πk be its expected frequency in an episode

wk(s, a) := Σ_{t=1}^{H} P(st = s, πk_t(st) = a) = Σ_{t=1}^{H} P_{1:t−1} I{s = ·, a = πk_t(s)}(s0).

The importance ιk of (s, a) is its relative weight compared to wmin := ϵ/(4H|S|) on a log-scale

ιk(s, a) := min{ zi : zi ≥ wk(s, a)/wmin }  where z1 = 0 and zi = 2^{i−2} ∀ i = 2, 3, . . . .

Note that ιk(s, a) ∈ {0, 1, 2, 4, 8, 16, . . . } is an integer indicating the influence of the state-action pair on the value function of πk. Similarly, we define the knownness

κk(s, a) := max{ zi : zi ≤ nk(s, a)/(m wk(s, a)) } ∈ {0, 1, 2, 4, . . . },

which indicates how often (s, a) has been observed relative to its importance. The constant m is defined in Algorithm 1. We can now categorize (s, a)-pairs into subsets

Xk,κ,ι := {(s, a) ∈ Xk : κk(s, a) = κ, ιk(s, a) = ι}  and  X̄k = S × A \ Xk,

where Xk = {(s, a) ∈ S × A : ιk(s, a) > 0} is the active set and X̄k the set of state-action pairs that are very unlikely under the current policy. Intuitively, the model of UCFH is accurate if only few (s, a) are in categories with low knownness, that is, important under the current policy but not observed often so far. Recall that over time observations are generated under many policies (as the policy is recomputed), so this condition does not always hold. We will therefore distinguish between phases k where |Xk,κ,ι| ≤ κ for all κ and ι and phases where this condition is violated. The condition essentially allows for only a few (s, a) in categories that are less known and more and more (s, a) in categories that are better known. In fact, we will show that the policy is ϵ-optimal with high probability in phases that satisfy this condition.
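The importance and knownness definitions reduce to rounding a ratio up (resp. down) to the grid {0, 1, 2, 4, 8, ...}. A small sketch (hypothetical helper names, not from the paper):

```python
def importance(w, w_min):
    """iota(s, a): smallest z in {0, 1, 2, 4, 8, ...} with z >= w / w_min."""
    ratio = w / w_min
    if ratio <= 0:
        return 0          # z_1 = 0 handles never-visited pairs
    z = 1                 # z_2 = 2^0 = 1
    while z < ratio:      # climb powers of two until z >= ratio
        z *= 2
    return z

def knownness(n, m, w):
    """kappa(s, a): largest z in {0, 1, 2, 4, ...} with z <= n / (m * w)."""
    ratio = n / (m * w)
    if ratio < 1:
        return 0          # fewer than m*w observations: not known at all
    z = 1
    while 2 * z <= ratio:  # largest power of two below the ratio
        z *= 2
    return z
```

With these, the category of a pair is simply the tuple `(knownness(...), importance(...))`, matching the partition into the sets Xk,κ,ι.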
We first show the validity of the confidence sets Mk.

Lemma 1 (Capturing the true MDP whp.). M ∈ Mk for all k with probability at least 1 − δ/2.

Proof Sketch. By combining Hoeffding's inequality, Bernstein's inequality and the concentration result on empirical variances by Maurer and Pontil [18] with the union bound, we get that p(s′|s, a) ∈ P with probability at least 1 − δ1 for a single phase k, fixed (s, a) ∈ S × A and fixed s′ ∈ S(s, a). We then show that the number of model updates is bounded by Umax and apply the union bound.

The following lemma bounds the number of episodes in which ∀κ, ι : |Xk,κ,ι| ≤ κ is violated with high probability.

Lemma 2. Let E be the number of episodes k for which there are κ and ι with |Xk,κ,ι| > κ, i.e. E = Σ_{k=1}^{∞} I{∃(κ, ι) : |Xk,κ,ι| > κ}, and assume that m ≥ (6H²/ϵ) ln(2Emax/δ). Then P(E ≤ 6N Emax) ≥ 1 − δ/2 where N = |S × A| m and Emax = log₂(H/wmin) log₂ |S|.

Proof Sketch. We first bound the total number of times a fixed pair (s, a) can be observed while being in a particular category Xk,κ,ι in all phases k for 1 ≤ κ < |S|. We then show that for a particular (κ, ι), the number of episodes where |Xk,κ,ι| > κ is bounded with high probability, as the value of ι implies a minimum probability of observing each (s, a)-pair in Xk,κ,ι in an episode. Since the observations are not independent, we use martingale concentration results to show the statement for a fixed (κ, ι). The desired result follows with the union bound over all relevant κ and ι.

The next lemma states that in episodes where the condition ∀κ, ι : |Xk,κ,ι| ≤ κ is satisfied and the true MDP is in the confidence set, the expected optimistic policy value is close to the true value. This lemma is the technically most involved part of the proof.

Lemma 3 (Bound mismatch in total reward). Assume M ∈ Mk. If |Xk,κ,ι| ≤ κ for all (κ, ι), 0 < ϵ ≤ 1 and m ≥ 512 (CH²/ϵ²)(log₂ log₂ H)² log₂²(8H²|S|²/ϵ) ln(6/δ1), then |Ṽ^{πk}_{1:H}(s0) − V^{πk}_{1:H}(s0)| ≤ ϵ.

Proof Sketch.
Using basic algebraic transformations, we show that |p − p̃| ≤ √(p̃(1 − p̃)) O(√((1/n) ln(1/δ1))) + O((1/n) ln(1/δ1)) for each p̃, p ∈ P in the confidence set as defined in Eq. 2. Since we assume M ∈ Mk, we know that p(s′|s, a) and p̃(s′|s, a) satisfy this bound with n(s, a) for all s, a and s′. We use that to bound the difference of the expected value function of the successor state in M and M̃, proving that

|(Pi − P̃i)Ṽ_{i+1:j}(s)| ≤ O((CH/n(s, π(s))) ln(1/δ1)) + O(√((C/n(s, π(s))) ln(1/δ1))) σ̃_{i:j}(s),

where the local variance of the value function is defined as σ²_{i:j}(s, a) := E[(V^π_{i+1:j}(s_{i+1}) − P^π_i V^π_{i+1:j}(s_i))² | s_i = s, a_i = a] and σ²_{i:j}(s) := σ²_{i:j}(s, πi(s)). This bound is then applied to |Ṽ_{1:H}(s0) − V_{1:H}(s0)| ≤ Σ_{t=0}^{H−1} P_{1:t} |(Pt − P̃t)Ṽ_{t+1:H}(s)|. The basic idea is to split the bound into a sum of two parts by partitioning the (s, a) space by knownness, that is, (st, at) ∈ Xκ,ι for all κ and ι, and (st, at) ∈ X̄. Using the fact that w(st, at) and n(st, at) are tightly coupled for each (κ, ι), we can eventually bound the expression by ϵ.

The final key ingredient in the remainder of the proof is to bound Σ_{t=1}^{H} P_{1:t−1} σ_{t:H}(s)² by O(H²) instead of the trivial bound O(H³). To this end, we show the lemma below.

Lemma 4. The variance of the value function defined as 𝕍^π_{i:j}(s) := E[(Σ_{t=i}^{j} r_t(s_t) − V^π_{i:j}(s_i))² | s_i = s] satisfies a Bellman equation 𝕍_{i:j} = Pi 𝕍_{i+1:j} + σ²_{i:j}, which gives 𝕍_{i:j} = Σ_{t=i}^{j} P_{i:t−1} σ²_{t:j}. Since 0 ≤ 𝕍_{1:H} ≤ H²r²max, it follows that 0 ≤ Σ_{t=i}^{j} P_{i:t−1} σ²_{t:j}(s) ≤ H²r²max for all s ∈ S.

[Figure 1: Class of hard-to-learn finite-horizon MDPs. From the start state 0, the agent moves to one of the states 1, . . . , n uniformly, p(i|0, a) = 1/n; from state i it reaches the absorbing states + or − with p(+|i, a) = 1/2 + ϵ′_i(a) and p(−|i, a) = 1/2 − ϵ′_i(a), where r(+) = 1 and r(−) = 0. The function ϵ′ is defined as ϵ′(a1) = ϵ/2, ϵ′(a*_i) = ϵ and otherwise ϵ′(a) = 0, where a*_i is an unknown action per state i and ϵ is a parameter.]

Proof Sketch. The proof works by induction and uses the fact that the value function satisfies the Bellman equation and the tower property of conditional expectations.
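The Bellman equation for return variances in Lemma 4 (the law of total variance applied step by step) can be checked numerically on a tiny time-dependent MDP: the recursion below must reproduce the variance obtained by brute-force trajectory enumeration. This is an illustrative sketch with deterministic per-state rewards; all names are hypothetical.

```python
from itertools import product

def values_and_variances(P, r, H):
    """Backward recursion for V_t(s) and the return variance W_t(s).
    P[t][s][s2]: transition at step t (t = 0..H-2); r[t][s]: reward
    collected in state s at step t. Uses Lemma 4's Bellman equation
    W_t = P_t W_{t+1} + sigma2_t."""
    S = len(r[0])
    V = [[0.0] * S for _ in range(H)]
    W = [[0.0] * S for _ in range(H)]
    for t in range(H - 1, -1, -1):
        for s in range(S):
            if t == H - 1:
                V[t][s] = r[t][s]      # last reward is deterministic
                continue
            ev = sum(P[t][s][s2] * V[t + 1][s2] for s2 in range(S))
            ev2 = sum(P[t][s][s2] * V[t + 1][s2] ** 2 for s2 in range(S))
            sigma2 = ev2 - ev * ev     # local variance of the next value
            V[t][s] = r[t][s] + ev
            W[t][s] = sum(P[t][s][s2] * W[t + 1][s2] for s2 in range(S)) + sigma2
    return V, W

def brute_force_variance(P, r, H, s0):
    """Variance of the H-step return from s0 by enumerating trajectories."""
    S = len(r[0])
    mean = sq = 0.0
    for rest in product(range(S), repeat=H - 1):
        states = (s0,) + rest
        prob = 1.0
        for t in range(H - 1):
            prob *= P[t][states[t]][states[t + 1]]
        ret = sum(r[t][states[t]] for t in range(H))
        mean += prob * ret
        sq += prob * ret * ret
    return sq - mean * mean
```

Agreement of the two computations on random instances is exactly the content of the Bellman equation 𝕍 = P𝕍' + σ² for deterministic state rewards.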
Proof Sketch for Theorem 1. The proof of Theorem 1 consists of the following major parts:
1. The true MDP is in the set of MDPs Mk for all phases k with probability at least 1 − δ/2 (Lemma 1).
2. The FixedHorizonEVI algorithm computes a value function whose optimistic value is higher than the optimal reward in the true MDP with probability at least 1 − δ/2 (Lemma A.1).
3. The number of episodes with |Xk,κ,ι| > κ for some κ and ι is bounded with probability at least 1 − δ/2 by Õ(|S × A| m) if m = Ω̃((H²/ϵ) ln(|S|/δ)) (Lemma 2).
4. If |Xk,κ,ι| ≤ κ for all κ, ι, i.e., relevant state-action pairs are sufficiently known, and m = Ω̃((CH²/ϵ²) ln(1/δ1)), then the optimistic value computed is ϵ-close to the true MDP value. Together with part 2, we get that with high probability the policy πk is ϵ-optimal in this case.
5. From parts 3 and 4, with probability 1 − δ, there are at most Õ((C|S × A|H²/ϵ²) ln(1/δ)) episodes that are not ϵ-optimal.

4 Lower PAC Bound

Theorem 2. There exist positive constants c1, c2, δ0, ϵ0 such that for every δ ∈ (0, δ0) and ϵ ∈ (0, ϵ0) and for every algorithm A that satisfies a PAC guarantee for (ϵ, δ) and outputs a deterministic policy, there is a fixed-horizon episodic MDP Mhard with

E[nA] ≥ c1 ((H − 2)²(|A| − 1)(|S| − 3)/ϵ²) ln(c2/δ) + c3 = Ω((|S × A|H²/ϵ²) ln(c2/δ) + c3)   (3)

where nA is the number of episodes until the algorithm's policy is (ϵ, δ)-accurate. The constants can be set to δ0 = e⁻⁴/80 ≈ 1/5000, ϵ0 = (H − 2)/(640e⁴) ≈ H/35000, c2 = 4 and c3 = e⁻⁴/80.

The ranges of possible δ and ϵ are of similar order to those in other state-of-the-art lower bounds for multi-armed bandits [19] and discounted MDPs [14, 6]. They are mostly determined by the bandit result by Mannor and Tsitsiklis [19] that we build on. Increasing the parameter limits δ0 and ϵ0 for bandits would immediately result in larger ranges in our lower bound, but this was not the focus of our analysis.

Proof Sketch.
The basic idea is to show that the class of MDPs shown in Figure 1 requires at least a number of observed episodes of the order of Equation (3). From the start state 0, the agent ends up in states 1 to n with equal probability, independent of the action. From each such state i, the agent transitions to either a good state + with reward 1 or a bad state − with reward 0 and stays there for the rest of the episode. Therefore, each state i = 1, . . . , n is essentially a multi-armed bandit with binary rewards of either 0 or H − 2. For each bandit, the probability of ending up in + or − is equal except for the first action a1 with p(st+1 = +|st = i, at = a1) = 1/2 + ϵ/2 and possibly an unknown optimal action a*_i (different for each state i) with p(st+1 = +|st = i, at = a*_i) = 1/2 + ϵ. In the episodic fixed-horizon setting we are considering, taking a suboptimal action in one of the bandits does not necessarily yield a suboptimal episode. We have to consider the average over all bandits instead. In an ϵ-optimal episode, the agent therefore needs to follow a policy that would solve at least a certain portion of all n multi-armed bandits with probability at least 1 − δ. We show that the best strategy for the agent to achieve this is to try to solve all bandits with equal probability. The number of samples required to do so then results in the lower bound in Equation (3). Similar MDPs that essentially solve multiple such multi-armed bandits have been used to prove lower sample-complexity bounds for discounted MDPs [14, 6]. However, the analysis in the infinite-horizon case as well as for the sliding-window fixed-horizon optimality criterion considered by Kakade [4] is significantly simpler. For these criteria, every time step in which the agent follows a policy that is not ϵ-optimal counts as a "mistake". Therefore, every time the agent does not pick the optimal arm in any of the multi-armed bandits counts as a mistake.
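The hard instances just described can be written down concretely as a transition kernel. The sketch below encodes Figure 1 under the assumption that action index 0 plays the role of a1 and that the hidden optimal actions are nonzero; function and argument names are illustrative, not from the paper.

```python
def hard_mdp(n, num_actions, eps, opt):
    """Transition kernel of the Figure 1 hard instances (a sketch).

    States: 0 = start, 1..n = bandit states, n+1 = '+', n+2 = '-'.
    Action 0 stands in for a1; opt[i-1] is the hidden optimal action
    a*_i for bandit state i (assumed nonzero).
    Returns P[(s, a)] -> {next_state: probability}.
    """
    plus, minus = n + 1, n + 2
    P = {}
    for a in range(num_actions):
        # start state: uniform over the n bandit states, action-independent
        P[(0, a)] = {i: 1.0 / n for i in range(1, n + 1)}
        for i in range(1, n + 1):
            if a == opt[i - 1]:
                e = eps            # hidden optimal action: epsilon edge
            elif a == 0:
                e = eps / 2        # the known slightly-good action a1
            else:
                e = 0.0            # all other actions are fair coins
            P[(i, a)] = {plus: 0.5 + e, minus: 0.5 - e}
        P[(plus, a)] = {plus: 1.0}     # absorbing, reward 1 per step
        P[(minus, a)] = {minus: 1.0}   # absorbing, reward 0 per step
    return P
```

Each bandit state's gap between a*_i and a1 is ϵ/2, which is what forces the Ω(1/ϵ²) samples per bandit in the lower bound.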
This contrasts with our fixed-horizon setting, where we must instead consider the average over all bandits.

5 Related Work on Fixed-Horizon Sample Complexity Bounds

We are not aware of any lower sample complexity bounds beyond multi-armed bandit results that directly apply to our setting. Our upper bound in Theorem 1 improves upon existing results by at least a factor of H. We briefly review those existing results in the following.

Timestep bounds. Kakade [4, Chapter 8] proves upper and lower PAC bounds for a similar setting where the agent interacts indefinitely with the environment but the interactions are divided into segments of equal length and the agent is evaluated by the expected sum of rewards until the end of each segment. The bound states that there are no more than Õ((|S|²|A|H⁶/ϵ³) ln(1/δ))⁶ time steps in which the agent acts ϵ-suboptimally. Strehl et al. [1] improve the state-dependency of these bounds for their delayed Q-learning algorithm to Õ((|S||A|H⁵/ϵ⁴) ln(1/δ)). However, in episodic MDPs it is more natural to consider performance on the entire episode, since suboptimality near the end of the episode is no issue as long as the total reward of the entire episode is sufficiently high. Kolter and Ng [9] use an interesting sliding-window criterion, but prove bounds for a Bayesian setting instead of PAC. Timestep-based bounds can be applied to the episodic case by augmenting the original state space with a time-index per episode to allow resets after H steps. This adds an H-dependency for each |S| in the original bound, which results in a horizon-dependency of at least H⁶ in these existing bounds. Translating the regret bounds of UCRL2 in Corollary 3 by Jaksch et al. [20] yields a PAC bound on the number of episodes of at least Õ((|S|²|A|H³/ϵ²) ln(1/δ)) even if one ignores the reset after H time steps. Timestep-based lower PAC bounds cannot be applied directly to the episodic reward criterion.

Episode bounds.
Similar to us, Fiechter [10] uses the value of initial states as the optimality criterion, but defines the value w.r.t. the γ-discounted infinite horizon. His results of order Õ((|S|²|A|H⁷/ϵ²) ln(1/δ)) episodes of length Õ(1/(1 − γ)) ≈ Õ(H) are therefore not directly applicable to our setting. Auer and Ortner [5] investigate the same setting as we do and propose a UCB-type algorithm that has no regret, which translates into a basic PAC bound of order Õ((|S|¹⁰|A|H⁷/ϵ³) ln(1/δ)) episodes. We improve on this bound substantially in terms of its dependency on H, |S| and ϵ. Reveliotis and Bountourelis [12] also consider the episodic undiscounted fixed-horizon setting and present an efficient algorithm for cases where the transition graph is acyclic and the agent knows for each state a policy that visits this state with a known minimum probability q. These assumptions are quite limiting and rarely hold in practice, and their bound of order Õ((|S||A|H⁴/(ϵ²q)) ln(1/δ)) explicitly depends on 1/q.

6 Conclusion

We have shown upper and lower bounds on the sample complexity of episodic fixed-horizon RL that are tight up to log-factors in the time horizon H, the accuracy ϵ, the number of actions |A| and up to an additive constant in the failure probability δ. These bounds improve upon existing results by a factor of at least H. One might hope to reduce the dependency of the upper bound on |S| to linear by an analysis similar to Mormax [7] for discounted MDPs, which has sample complexity linear in |S| at the penalty of additional dependencies on H. Our proposed UCFH algorithm, which achieves our PAC bound, can be applied directly to a wide range of fixed-horizon episodic MDPs with known rewards and does not require additional structure such as sparse or acyclic state transitions assumed in previous work. The empirical evaluation of UCFH is an interesting direction for future work.

Acknowledgments: We thank Tor Lattimore for the helpful suggestions and comments.
This work was supported by an NSF CAREER award and the ONR Young Investigator Program.

⁶For comparison we adapt existing bounds to our setting. While the original bound stated by Kakade [4] only has H³, an additional H³ comes in through ϵ⁻³ due to the different normalization of rewards.

References
[1] Alexander L. Strehl, Lihong Li, Eric Wiewiora, John Langford, and Michael L. Littman. PAC Model-Free Reinforcement Learning. In International Conference on Machine Learning, 2006.
[2] Michael J. Kearns and Satinder P. Singh. Finite-Sample Convergence Rates for Q-Learning and Indirect Algorithms. In Advances in Neural Information Processing Systems, 1999.
[3] Ronen I. Brafman and Moshe Tennenholtz. R-MAX – A General Polynomial Time Algorithm for Near-Optimal Reinforcement Learning. Journal of Machine Learning Research, 3:213–231, 2002.
[4] Sham M. Kakade. On the Sample Complexity of Reinforcement Learning. PhD thesis, University College London, 2003.
[5] Peter Auer and Ronald Ortner. Online Regret Bounds for a New Reinforcement Learning Algorithm. In Proceedings 1st Austrian Cognitive Vision Workshop, 2005.
[6] Tor Lattimore and Marcus Hutter. PAC bounds for discounted MDPs. In International Conference on Algorithmic Learning Theory, 2012.
[7] István Szita and Csaba Szepesvári. Model-based reinforcement learning with nearly tight exploration complexity bounds. In International Conference on Machine Learning, 2010.
[8] Mohammad Gheshlaghi Azar, Rémi Munos, and Hilbert J. Kappen. On the Sample Complexity of Reinforcement Learning with a Generative Model. In International Conference on Machine Learning, 2012.
[9] J. Zico Kolter and Andrew Y. Ng. Near-Bayesian exploration in polynomial time. In International Conference on Machine Learning, 2009.
[10] Claude-Nicolas Fiechter. Efficient reinforcement learning. In Conference on Learning Theory, 1994.
[11] Claude-Nicolas Fiechter. Expected Mistake Bound Model for On-Line Reinforcement Learning.
In International Conference on Machine Learning, 1997.
[12] Spyros Reveliotis and Theologos Bountourelis. Efficient PAC learning for episodic tasks with acyclic state spaces. Discrete Event Dynamic Systems: Theory and Applications, 17(3):307–327, 2007.
[13] Alexander L. Strehl, Lihong Li, and Michael L. Littman. Incremental Model-based Learners With Formal Learning-Time Guarantees. In Conference on Uncertainty in Artificial Intelligence, 2006.
[14] Alexander L. Strehl, Lihong Li, and Michael L. Littman. Reinforcement Learning in Finite MDPs: PAC Analysis. Journal of Machine Learning Research, 10:2413–2444, 2009.
[15] Thomas Jaksch, Ronald Ortner, and Peter Auer. Near-optimal Regret Bounds for Reinforcement Learning. In Advances in Neural Information Processing Systems, 2010.
[16] Alexander L. Strehl and Michael L. Littman. An analysis of model-based Interval Estimation for Markov Decision Processes. Journal of Computer and System Sciences, 74(8):1309–1331, 2008.
[17] Matthew J. Sobel. The Variance of Markov Decision Processes. Journal of Applied Probability, 19(4):794–802, 1982.
[18] Andreas Maurer and Massimiliano Pontil. Empirical Bernstein Bounds and Sample-Variance Penalization. In Conference on Learning Theory, 2009.
[19] Shie Mannor and John N. Tsitsiklis. The Sample Complexity of Exploration in the Multi-Armed Bandit Problem. Journal of Machine Learning Research, 5:623–648, 2004.
[20] Thomas Jaksch, Ronald Ortner, and Peter Auer. Near-optimal Regret Bounds for Reinforcement Learning. Journal of Machine Learning Research, 11:1563–1600, 2010.
[21] Fan Chung and Linyuan Lu. Concentration Inequalities and Martingale Inequalities: A Survey. Internet Mathematics, 3(1):79–127, 2006.
Algorithms with Logarithmic or Sublinear Regret for Constrained Contextual Bandits

Huasen Wu, University of California at Davis, hswu@ucdavis.edu
R. Srikant, University of Illinois at Urbana-Champaign, rsrikant@illinois.edu
Xin Liu, University of California at Davis, liu@cs.ucdavis.edu
Chong Jiang, University of Illinois at Urbana-Champaign, jiang17@illinois.edu

Abstract

We study contextual bandits with budget and time constraints, referred to as constrained contextual bandits. The time and budget constraints significantly complicate the exploration and exploitation tradeoff because they introduce complex coupling among contexts over time. To gain insight, we first study unit-cost systems with known context distribution. When the expected rewards are known, we develop an approximation of the oracle, referred to as Adaptive-Linear-Programming (ALP), which achieves near-optimality and only requires the ordering of expected rewards. With these highly desirable features, we then combine ALP with the upper-confidence-bound (UCB) method in the general case where the expected rewards are unknown a priori. We show that the proposed UCB-ALP algorithm achieves logarithmic regret except for certain boundary cases. Further, we design algorithms and obtain similar regret bounds for more general systems with unknown context distribution and heterogeneous costs. To the best of our knowledge, this is the first work that shows how to achieve logarithmic regret in constrained contextual bandits. Moreover, this work also sheds light on the study of computationally efficient algorithms for general constrained contextual bandits.

1 Introduction

The contextual bandit problem [1, 2, 3] is an important extension of the classic multi-armed bandit (MAB) problem [4], where the agent can observe a set of features, referred to as context, before making a decision.
After the random arrival of a context, the agent chooses an action and receives a random reward with expectation depending on both the context and action. To maximize the total reward, the agent needs to make a careful tradeoff between taking the best action based on the historical performance (exploitation) and discovering the potentially better alternative actions under a given context (exploration). This model has attracted much attention as it fits the personalized service requirement in many applications such as clinical trials, online recommendation, and online hiring in crowdsourcing. Existing works try to reduce the regret of contextual bandits by leveraging the structure of the context-reward models such as linearity [5] or similarity [6], and more recent work [7] focuses on computationally efficient algorithms with minimum regret. For Markovian context arrivals, algorithms such as UCRL [8] for more general reinforcement learning problem can be used to achieve logarithmic regret. However, traditional contextual bandit models do not capture an important characteristic of real systems: in addition to time, there is usually a cost associated with the resource consumed by each action and the total cost is limited by a budget in many applications. Taking crowdsourcing [9] as an example, the budget constraint for a given set of tasks will limit the number of workers that an employer can hire. Another example is the clinical trials [10], where each treatment is usually costly and the budget of a trial is limited. Although budget constraints have been studied in non-contextual bandits where logarithmic or sublinear regret is achieved [11, 12, 13, 14, 15, 16], as we will see later, these results are inapplicable in the case with observable contexts. 1 In this paper, we study contextual bandit problems with budget and time constraints, referred to as constrained contextual bandits, where the agent is given a budget B and a time-horizon T. 
In addition to a reward, a cost is incurred whenever an action is taken under a context. The bandit process ends when the agent runs out of either budget or time. The objective of the agent is to maximize the expected total reward subject to the budget and time constraints. We are interested in the regime where B and T grow towards infinity proportionally. The above constrained contextual bandit problem can be viewed as a special case of Resourceful Contextual Bandits (RCB) [17]. In [17], RCB is studied under more general settings with possibly infinite contexts, random costs, and multiple budget constraints. A Mixture Elimination algorithm is proposed and shown to achieve O( √ T) regret. However, the benchmark for the definition of regret in [17] is restricted to within a finite policy set. Moreover, the Mixture Elimination algorithm suffers high complexity and the design of computationally efficient algorithms for such general settings is still an open problem. To tackle this problem, motivated by certain applications, we restrict the set of parameters in our model as follows: we assume finite discrete contexts, fixed costs, and a single budget constraint. This simplified model is justified in many scenarios such as clinical trials [10] and rate selection in wireless networks [18]. More importantly, these simplifications allow us to design easily-implementable algorithms that achieve O(log T) regret (except for a set of parameters of zero Lebesgue measure, which we refer to as boundary cases), where the regret is defined more naturally as the performance gap between the proposed algorithm and the oracle, i.e., the optimal algorithm with known statistics. Even with simplified assumptions considered in this paper, the exploration-exploitation tradeoff is still challenging due to the budget and time constraints. The key challenge comes from the complexity of the oracle algorithm. 
With budget and time constraints, the oracle algorithm cannot simply take the action that maximizes the instantaneous reward. In contrast, it needs to balance between the instantaneous and long-term rewards based on the current context and the remaining budget. In principle, dynamic programming (DP) can be used to obtain this balance. However, using DP in our scenario incurs difficulties in both algorithm design and analysis: first, the implementation of DP is computationally complex due to the curse of dimensionality; second, it is difficult to obtain a benchmark for regret analysis, since the DP algorithm is implemented in a recursive manner and its expected total reward is hard to be expressed in a closed form; third, it is difficult to extend the DP algorithm to the case with unknown statistics, due to the difficulty of evaluating the impact of estimation errors on the performance of DP-type algorithms. To address these difficulties, we first study approximations of the oracle algorithm when the system statistics are known. Our key idea is to approximate the oracle algorithm with linear programming (LP) that relaxes the hard budget constraint to an average budget constraint. When fixing the average budget constraint at B/T, this LP approximation provides an upper bound on the expected total reward, which serves as a good benchmark in regret analysis. Further, we propose an Adaptive Linear Programming (ALP) algorithm that adjusts the budget constraint to the average remaining budget bτ/τ, where τ is the remaining time and bτ is the remaining budget. Note that although the idea of approximating a DP problem with an LP problem has been widely studied in literature (e.g., [17, 19]), the design and analysis of ALP here is quite different. In particular, we show that ALP achieves O(1) regret, i.e., its expected total reward is within a constant independent of T from the optimum, except for certain boundaries. 
This ALP approximation and its regret analysis make an important step towards achieving logarithmic regret for constrained contextual bandits. Using the insights from the case with known statistics, we study algorithms for constrained contextual bandits with unknown expected rewards. Complicated interactions between information acquisition and decision making arise in this case. Fortunately, the ALP algorithm has a highly desirable property that it only requires the ordering of the expected rewards and can tolerate certain estimation errors of system parameters. This property allows us to combine ALP with estimation methods that can efficiently provide a correct rank of the expected rewards. In this paper, we propose a UCB-ALP algorithm by combining ALP with the upper-confidence-bound (UCB) method [4]. We show that UCB-ALP achieves O(log T) regret except for certain boundary cases, where its regret is O(√T). We note that UCB-type algorithms are proposed in [20] for non-contextual bandits with concave rewards and convex constraints, and further extended to linear contextual bandits. However, [20] focuses on static contexts¹ and achieves O(√T) regret in our setting since it uses a fixed budget constraint in each round. In comparison, we consider random context arrivals and use an adaptive budget constraint to achieve logarithmic regret. To the best of our knowledge, this is the first work that shows how to achieve logarithmic regret in constrained contextual bandits. Moreover, the proposed UCB-ALP algorithm is quite computationally efficient and we believe these results shed light on addressing the open problem of general constrained contextual bandits.

¹After the online publication of our preliminary version, two recent papers [21, 22] extend their previous work [20] to the dynamic context case, where they focus on possibly infinite contexts and achieve O(√T) regret, and [21] restricts to a finite policy set as [17].
Although the intuition behind ALP and UCB-ALP is natural, the rigorous analysis of their regret is non-trivial since we need to consider many interacting factors such as action/context ranking errors, remaining budget fluctuation, and randomness of context arrival. We evaluate the impact of these factors using a series of novel techniques, e.g., the method of showing concentration properties under adaptive algorithms and the method of bounding estimation errors under random contexts. For the ease of exposition, we study the ALP and UCB-ALP algorithms in unit-cost systems with known context distribution in Sections 3 and 4, respectively. Then we discuss the generalization to systems with unknown context distribution in Section 5 and with heterogeneous costs in Section 6, which are much more challenging and the details can be found in the supplementary material. 2 System Model We consider a contextual bandit problem with a context set X = {1, 2, . . . , J} and an action set A = {1, 2, . . . , K}. At each round t, a context Xt arrives independently with identical distribution P{Xt = j} = πj, j ∈X, and each action k ∈A generates a non-negative reward Yk,t. Under a given context Xt = j, the reward Yk,t’s are independent random variables in [0, 1]. The conditional expectation E[Yk,t|Xt = j] = uj,k is unknown to the agent. Moreover, a cost is incurred if action k is taken under context j. To gain insight into constrained contextual bandits, we consider fixed and known costs in this paper, where the cost is cj,k > 0 when action k is taken under context j. Similar to traditional contextual bandits, the context Xt is observable at the beginning of round t, while only the reward of the action taken by the agent is revealed at the end of round t. At the beginning of round t, the agent observes the context Xt and takes an action At from {0}∪A, where “0” represents a dummy action that the agent skips the current context. 
Let $Y_t$ and $Z_t$ be the reward and cost for the agent in round $t$, respectively. If the agent takes an action $A_t = k > 0$, then the reward is $Y_t = Y_{k,t}$ and the cost is $Z_t = c_{X_t,k}$. Otherwise, if the agent takes the dummy action $A_t = 0$, neither reward nor cost is incurred, i.e., $Y_t = 0$ and $Z_t = 0$. In this paper, we focus on contextual bandits with a known time-horizon $T$ and limited budget $B$. The bandit process ends when the agent runs out of the budget or at the end of time $T$. A contextual bandit algorithm $\Gamma$ is a function that maps the historical observations $H_{t-1} = (X_1, A_1, Y_1; X_2, A_2, Y_2; \ldots; X_{t-1}, A_{t-1}, Y_{t-1})$ and the current context $X_t$ to an action $A_t \in \{0\} \cup \mathcal{A}$. The objective of the algorithm is to maximize the expected total reward $U_\Gamma(T, B)$ for a given time-horizon $T$ and a budget $B$, i.e.,
$$\max_{\Gamma} \; U_\Gamma(T, B) = \mathbb{E}_\Gamma\Big[\sum_{t=1}^{T} Y_t\Big] \quad \text{subject to} \quad \sum_{t=1}^{T} Z_t \le B,$$
where the expectation is taken over the distributions of contexts and rewards. Note that we consider a "hard" budget constraint, i.e., the total costs should not be greater than $B$ under any realization. We measure the performance of the algorithm $\Gamma$ by comparing it with the oracle, which is the optimal algorithm with known statistics, including the knowledge of the $\pi_j$'s, $u_{j,k}$'s, and $c_{j,k}$'s. Let $U^*(T, B)$ be the expected total reward obtained by the oracle algorithm. Then, the regret of the algorithm $\Gamma$ is defined as $R_\Gamma(T, B) = U^*(T, B) - U_\Gamma(T, B)$. The objective of the algorithm is then to minimize the regret. We are interested in the asymptotic regime where the time-horizon $T$ and the budget $B$ grow to infinity proportionally, i.e., with a fixed ratio $\rho = B/T$.

3 Approximations of the Oracle

In this section, we study approximations of the oracle, where the statistics of bandits are known to the agent. This will provide a benchmark for the regret analysis and insights into the design of constrained contextual bandit algorithms.
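To make the system model concrete, the following sketch simulates the unit-cost setting described above: contexts drawn i.i.d. from $\pi$, Bernoulli rewards with means $u_{j,k}$, a hard budget, and a dummy "skip" action. The class name, parameters, and values are all illustrative, not part of the paper.

```python
import random

class ConstrainedContextualBandit:
    """Minimal simulator of the system model: J contexts drawn i.i.d.
    from pi, K actions with Bernoulli rewards of mean u[j][k], unit
    costs, and a hard budget. Names and values are illustrative."""

    def __init__(self, pi, u, budget, horizon, seed=0):
        self.pi = pi          # context distribution pi_j
        self.u = u            # u[j][k]: expected reward of action k in context j
        self.b = budget       # remaining budget
        self.T = horizon      # time-horizon T
        self.rng = random.Random(seed)

    def draw_context(self):
        # inverse-CDF sampling of the context index
        r, acc = self.rng.random(), 0.0
        for j, p in enumerate(self.pi):
            acc += p
            if r < acc:
                return j
        return len(self.pi) - 1

    def step(self, context, action):
        """action=None plays the dummy action '0': no reward, no cost."""
        if action is None or self.b <= 0:
            return 0.0
        self.b -= 1  # unit cost c_{j,k} = 1
        # Bernoulli reward with mean u[j][k]; rewards lie in [0, 1]
        return 1.0 if self.rng.random() < self.u[context][action] else 0.0
```

An algorithm interacts with this simulator by observing `draw_context()`, choosing an action (or `None`), and receiving the reward from `step`, mirroring the observation model above.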
As a starting point, we focus on unit-cost systems, i.e., $c_{j,k} = 1$ for each $j$ and $k$, from Section 3 to Section 5; this assumption will be relaxed in Section 6. In unit-cost systems, the quality of action $k$ under context $j$ is fully captured by its expected reward $u_{j,k}$. Let $u_j^*$ be the highest expected reward under context $j$, and $k_j^*$ be the best action for context $j$, i.e., $u_j^* = \max_{k \in \mathcal{A}} u_{j,k}$ and $k_j^* = \arg\max_{k \in \mathcal{A}} u_{j,k}$. For ease of exposition, we assume that the best action under each context is unique, i.e., $u_{j,k} < u_j^*$ for all $j$ and $k \ne k_j^*$. Similarly, we also assume $u_1^* > u_2^* > \ldots > u_J^*$ for simplicity. With the knowledge of the $u_{j,k}$'s, the agent knows the best action $k_j^*$ and its expected reward $u_j^*$ under any context $j$. In each round $t$, the task of the oracle is deciding whether to take action $k_{X_t}^*$ or not, depending on the remaining time $\tau = T - t + 1$ and the remaining budget $b_\tau$. The special case of two-context systems ($J = 2$) is trivial: the agent just needs to procrastinate for the better context (see Appendix D of the supplementary material). When considering more general cases with $J > 2$, however, it is computationally intractable to exactly characterize the oracle solution. Therefore, we resort to approximations based on linear programming (LP).

3.1 Upper Bound: Static Linear Programming

We propose an upper bound for the expected total reward $U^*(T, B)$ of the oracle by relaxing the hard constraint to an average constraint and solving the corresponding constrained LP problem. Specifically, let $p_j \in [0, 1]$ be the probability that the agent takes action $k_j^*$ for context $j$, and $1 - p_j$ be the probability that the agent skips context $j$ (i.e., taking action $A_t = 0$). Denote the probability vector as $p = (p_1, p_2, \ldots, p_J)$. For a time-horizon $T$ and budget $B$, consider the following LP problem:
$$(\mathrm{LP}_{T,B}) \quad \max_{p} \; \sum_{j=1}^{J} p_j \pi_j u_j^*, \qquad (1)$$
$$\text{subject to} \quad \sum_{j=1}^{J} p_j \pi_j \le B/T, \qquad (2)$$
$$p \in [0, 1]^J.$$
Define the following threshold as a function of the average budget $\rho = B/T$:
$$\tilde{j}(\rho) = \max\Big\{j : \sum_{j'=1}^{j} \pi_{j'} \le \rho\Big\}, \qquad (3)$$
with the convention that $\tilde j(\rho) = 0$ if $\pi_1 > \rho$. We can verify that the following solution is optimal for $\mathrm{LP}_{T,B}$:
$$p_j(\rho) = \begin{cases} 1, & \text{if } 1 \le j \le \tilde j(\rho), \\ \dfrac{\rho - \sum_{j'=1}^{\tilde j(\rho)} \pi_{j'}}{\pi_{\tilde j(\rho)+1}}, & \text{if } j = \tilde j(\rho) + 1, \\ 0, & \text{if } j > \tilde j(\rho) + 1. \end{cases} \qquad (4)$$
Correspondingly, the optimal value of $\mathrm{LP}_{T,B}$ is
$$v(\rho) = \sum_{j=1}^{\tilde j(\rho)} \pi_j u_j^* + p_{\tilde j(\rho)+1}(\rho)\, \pi_{\tilde j(\rho)+1}\, u^*_{\tilde j(\rho)+1}. \qquad (5)$$
This optimal value $v(\rho)$ can be viewed as the maximum expected reward in a single round with average budget $\rho$. Summing over the entire horizon, the total expected reward becomes $\hat U(T, B) = T v(\rho)$, which is an upper bound of $U^*(T, B)$.

Lemma 1. For a unit-cost system with known statistics, if the time-horizon is $T$ and the budget is $B$, then $\hat U(T, B) \ge U^*(T, B)$.

The proof of Lemma 1 is available in Appendix A of the supplementary material. With Lemma 1, we can bound the regret of any algorithm by comparing its performance with the upper bound $\hat U(T, B)$ instead of $U^*(T, B)$. Since $\hat U(T, B)$ has a simple expression, as we will see later, it significantly reduces the complexity of regret analysis.

3.2 Adaptive Linear Programming

Although the solution (4) provides an upper bound on the expected reward, using such a fixed algorithm will not achieve good performance as the ratio $b_\tau/\tau$, referred to as the average remaining budget, fluctuates over time. We propose an Adaptive Linear Programming (ALP) algorithm that adjusts the threshold and randomization probability according to the instantaneous value of $b_\tau/\tau$. Specifically, when the remaining time is $\tau$ and the remaining budget is $b_\tau = b$, we consider an LP problem $\mathrm{LP}_{\tau,b}$, which is the same as $\mathrm{LP}_{T,B}$ except that $B/T$ in Eq. (2) is replaced with $b/\tau$. Then, the optimal solution for $\mathrm{LP}_{\tau,b}$ can be obtained by replacing $\rho$ in Eqs. (3), (4), and (5) with $b/\tau$. The ALP algorithm then makes decisions based on this optimal solution.
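Since $\mathrm{LP}_{T,B}$ has the closed-form optimum of Eqs. (3)–(5), it can be computed without an LP solver. The sketch below implements that closed form, assuming the contexts are already sorted so that $u_1^* > u_2^* > \ldots > u_J^*$; the function and variable names are ours.

```python
def static_lp_solution(pi, u_star, rho):
    """Closed-form optimum of LP_{T,B} for average budget rho.
    Assumes contexts sorted so u_star[0] > u_star[1] > ...
    Returns (threshold j_tilde, probabilities p, value v(rho))."""
    J = len(pi)
    # threshold j_tilde(rho): number of contexts served with probability 1
    cum, j_tilde = 0.0, 0
    for j in range(J):
        if cum + pi[j] <= rho:
            cum += pi[j]
            j_tilde = j + 1
        else:
            break
    p = [0.0] * J
    for j in range(j_tilde):
        p[j] = 1.0
    if j_tilde < J:
        # the boundary context j_tilde + 1 is randomized, Eq. (4)
        p[j_tilde] = min(1.0, (rho - cum) / pi[j_tilde])
    # single-round value v(rho), Eq. (5)
    v = sum(p[j] * pi[j] * u_star[j] for j in range(J))
    return j_tilde, p, v
```

For example, with $\pi = (0.3, 0.3, 0.4)$, $u^* = (0.9, 0.8, 0.7)$ and $\rho = 0.45$, the first context is always served, the second with probability $0.5$, and the third never. Replacing `rho` with the instantaneous ratio $b/\tau$ gives the ALP decision probabilities of Section 3.2.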
ALP Algorithm: At each round $t$ with remaining budget $b_\tau = b$, obtain the $p_j(b/\tau)$'s by solving $\mathrm{LP}_{\tau,b}$; take action $A_t = k^*_{X_t}$ with probability $p_{X_t}(b/\tau)$, and $A_t = 0$ with probability $1 - p_{X_t}(b/\tau)$.

The above ALP algorithm only requires the ordering of the expected rewards instead of their accurate values. This highly desirable feature allows us to combine ALP with classic MAB algorithms such as UCB [4] for the case without knowledge of expected rewards. Moreover, this simple ALP algorithm achieves very good performance within a constant distance from the optimum, i.e., $O(1)$ regret, except for certain boundary cases. Specifically, for $1 \le j \le J$, let $q_j$ be the cumulative probability defined as $q_j = \sum_{j'=1}^{j} \pi_{j'}$, with the convention that $q_0 = 0$. The following theorem states the near optimality of ALP.

Theorem 1. Given any fixed $\rho \in (0, 1)$, the regret of ALP satisfies:
1) (Non-boundary cases) if $\rho \ne q_j$ for any $j \in \{1, 2, \ldots, J-1\}$, then $R_{\mathrm{ALP}}(T, B) \le \frac{u_1^* - u_J^*}{1 - e^{-2\delta^2}}$, where $\delta = \min\{\rho - q_{\tilde j(\rho)},\, q_{\tilde j(\rho)+1} - \rho\}$.
2) (Boundary cases) if $\rho = q_j$ for some $j \in \{1, 2, \ldots, J-1\}$, then $R_{\mathrm{ALP}}(T, B) \le \Theta^{(o)}\sqrt{T} + \frac{u_1^* - u_J^*}{1 - e^{-2(\delta')^2}}$, where $\Theta^{(o)} = 2(u_1^* - u_J^*)\sqrt{\rho(1-\rho)}$ and $\delta' = \min\{\rho - q_{\tilde j(\rho)-1},\, q_{\tilde j(\rho)+1} - \rho\}$.

Theorem 1 shows that ALP achieves $O(1)$ regret except for certain boundary cases, where it still achieves $O(\sqrt{T})$ regret. This implies that the regret due to the linear relaxation is negligible in most cases. Thus, when the expected rewards are unknown, we can achieve low regret, e.g., logarithmic regret, by combining ALP with appropriate information-acquisition mechanisms.

Sketch of Proof: Although the ALP algorithm seems fairly intuitive, its regret analysis is non-trivial. The key to the proof is to analyze the evolution of the remaining budget $b_\tau$ by mapping ALP to "sampling without replacement". Specifically, from Eq.
(4), we can verify that when the remaining time is $\tau$ and the remaining budget is $b_\tau = b$, the system consumes one unit of budget with probability $b/\tau$, and consumes nothing with probability $1 - b/\tau$. When considering the remaining budget, the ALP algorithm can be viewed as "sampling without replacement". Thus, we can show that $b_\tau$ follows the hypergeometric distribution [23] and has the following properties:

Lemma 2. Under the ALP algorithm, the remaining budget $b_\tau$ satisfies:
1) The expectation and variance of $b_\tau$ are $\mathbb{E}[b_\tau] = \rho\tau$ and $\mathrm{Var}(b_\tau) = \frac{T-\tau}{T-1}\,\tau\rho(1-\rho)$, respectively.
2) For any positive number $\delta$ satisfying $0 < \delta < \min\{\rho, 1-\rho\}$, the tail distribution of $b_\tau$ satisfies $P\{b_\tau < (\rho-\delta)\tau\} \le e^{-2\delta^2\tau}$ and $P\{b_\tau > (\rho+\delta)\tau\} \le e^{-2\delta^2\tau}$.

Then, we prove Theorem 1 based on Lemma 2. Note that the expected total reward under ALP is $U_{\mathrm{ALP}}(T, B) = \mathbb{E}\big[\sum_{\tau=1}^{T} v(b_\tau/\tau)\big]$, where $v(\cdot)$ is defined in (5) and the expectation is taken over the distribution of $b_\tau$. For the non-boundary cases, the single-round expected reward satisfies $\mathbb{E}[v(b_\tau/\tau)] = v(\rho)$ if the threshold $\tilde j(b_\tau/\tau) = \tilde j(\rho)$ for all possible $b_\tau$'s. The regret then is bounded by a constant because the probability of the event $\tilde j(b_\tau/\tau) \ne \tilde j(\rho)$ decays exponentially due to the concentration property of $b_\tau$. For the boundary cases, we show the conclusion by relating the regret to the variance of $b_\tau$. Please refer to Appendix B of the supplementary material for details.

4 UCB-ALP Algorithm for Constrained Contextual Bandits

Now we return to constrained contextual bandits, where the expected rewards are unknown to the agent. We assume the agent knows the context distribution, as in [17]; this assumption will be relaxed in Section 5. Thanks to the desirable properties of ALP, the maxim of "optimism under uncertainty" [8] is still applicable, and ALP can be extended to the bandit setting when combined with estimation policies that can quickly provide a correct ranking with high probability.
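Lemma 2's characterization of $b_\tau$ can be checked empirically: under ALP one budget unit is consumed with probability $b/\tau$ each round, so the remaining budget should concentrate around $\rho\tau$. The following Monte Carlo sketch simulates exactly this consumption process (parameter values are illustrative):

```python
import random

def simulate_alp_budget(T, rho, n_runs=2000, seed=0):
    """Simulate the remaining-budget process of ALP: with remaining
    time tau and budget b, one unit is consumed with probability
    b/tau (the 'sampling without replacement' view). Returns the
    empirical mean of b_tau at the checkpoint tau = T // 2, which
    Lemma 2 predicts to be close to rho * (T // 2)."""
    rng = random.Random(seed)
    checkpoint = T // 2
    total = 0.0
    for _ in range(n_runs):
        b = int(rho * T)
        for tau in range(T, checkpoint, -1):
            if b > 0 and rng.random() < b / tau:
                b -= 1  # one unit consumed with probability b / tau
        total += b
    return total / n_runs
```

With $T = 200$ and $\rho = 0.5$, the empirical mean at $\tau = 100$ comes out close to $\rho\tau = 50$, consistent with $\mathbb{E}[b_\tau] = \rho\tau$ in Lemma 2.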
Here, combining ALP with the UCB method [4], we propose a UCB-ALP algorithm for constrained contextual bandits.

4.1 UCB: Notations and Property

Let $C_{j,k}(t)$ be the number of times that action $k \in \mathcal{A}$ has been taken under context $j$ up to round $t$. If $C_{j,k}(t-1) > 0$, let $\bar u_{j,k}(t)$ be the empirical reward of action $k$ under context $j$, i.e., $\bar u_{j,k}(t) = \frac{1}{C_{j,k}(t-1)} \sum_{t'=1}^{t-1} Y_{t'} \mathbf{1}(X_{t'} = j, A_{t'} = k)$, where $\mathbf{1}(\cdot)$ is the indicator function. We define the UCB of $u_{j,k}$ at $t$ as $\hat u_{j,k}(t) = \bar u_{j,k}(t) + \sqrt{\frac{\log t}{2 C_{j,k}(t-1)}}$ for $C_{j,k}(t-1) > 0$, and $\hat u_{j,k}(t) = 1$ for $C_{j,k}(t-1) = 0$. Furthermore, we define the UCB of the maximum expected reward under context $j$ as $\hat u_j^*(t) = \max_{k \in \mathcal{A}} \hat u_{j,k}(t)$. As suggested in [24], we use a smaller coefficient in the exploration term $\sqrt{\frac{\log t}{2 C_{j,k}(t-1)}}$ than the traditional UCB algorithm [4] to achieve better performance. We present the following property of UCB that is important in the regret analysis.

Lemma 3. For two context-action pairs $(j, k)$ and $(j', k')$, if $u_{j,k} < u_{j',k'}$, then for any $t \le T$,
$$P\{\hat u_{j,k}(t) \ge \hat u_{j',k'}(t) \mid C_{j,k}(t-1) \ge \ell_{j,k}\} \le 2t^{-1}, \qquad (6)$$
where $\ell_{j,k} = \frac{2 \log T}{(u_{j',k'} - u_{j,k})^2}$.

Lemma 3 states that for two context-action pairs, the ordering of their expected rewards can be identified correctly with high probability, as long as the suboptimal pair has been executed sufficiently many times (on the order of $O(\log T)$). This property has been widely applied in the analysis of UCB-based algorithms [4, 13], and its proof can be found in [13, 25] with a minor modification of the coefficients.

4.2 UCB-ALP Algorithm

We propose a UCB-based adaptive linear programming (UCB-ALP) algorithm, as shown in Algorithm 1. As indicated by the name, the UCB-ALP algorithm maintains UCB estimates of the expected rewards for all context-action pairs and then implements the ALP algorithm based on these estimates. Note that the UCB estimates $\hat u_j^*(t)$ may be non-decreasing in $j$.
Thus, the solution of $\mathrm{LP}_{\tau,b}$ based on $\hat u_j^*(t)$ depends on the actual ordering of the $\hat u_j^*(t)$'s and may be different from Eq. (4). We use $\hat p_j(\cdot)$ rather than $p_j(\cdot)$ to indicate this difference.

Algorithm 1 UCB-ALP
Input: Time-horizon $T$, budget $B$, and context distribution $\pi_j$'s;
Init: $\tau = T$, $b = B$; $C_{j,k}(0) = 0$, $\bar u_{j,k}(0) = 0$, $\hat u_{j,k}(0) = 1$, $\forall j \in \mathcal{X}$ and $\forall k \in \mathcal{A}$; $\hat u_j^*(0) = 1$, $\forall j \in \mathcal{X}$;
for $t = 1$ to $T$ do
  $k_j^*(t) \leftarrow \arg\max_k \hat u_{j,k}(t)$, $\forall j$; $\hat u_j^*(t) \leftarrow \hat u_{j,k_j^*(t)}(t)$;
  if $b > 0$ then
    Obtain the probabilities $\hat p_j(b/\tau)$'s by solving $\mathrm{LP}_{\tau,b}$ with $u_j^*$ replaced by $\hat u_j^*(t)$;
    Take action $k_{X_t}^*(t)$ with probability $\hat p_{X_t}(b/\tau)$;
  end if
  Update $\tau$, $b$, $C_{j,k}(t)$, $\bar u_{j,k}(t)$, and $\hat u_{j,k}(t)$.
end for

4.3 Regret of UCB-ALP

We study the regret of UCB-ALP in this section. Due to space limitations, we only present a sketch of the analysis. Specific representations of the regret bounds and proof details can be found in the supplementary material. Recall that $q_j = \sum_{j'=1}^{j} \pi_{j'}$ ($1 \le j \le J$) are the boundaries defined in Section 3. We show that as the budget $B$ and the time-horizon $T$ grow to infinity in proportion, the proposed UCB-ALP algorithm achieves logarithmic regret except for the boundary cases.

Theorem 2. Given the $\pi_j$'s, the $u_{j,k}$'s, and a fixed $\rho \in (0, 1)$, the regret of UCB-ALP satisfies:
1) (Non-boundary cases) if $\rho \ne q_j$ for any $j \in \{1, 2, \ldots, J-1\}$, then the regret of UCB-ALP is $R_{\mathrm{UCB-ALP}}(T, B) = O(JK \log T)$.
2) (Boundary cases) if $\rho = q_j$ for some $j \in \{1, 2, \ldots, J-1\}$, then the regret of UCB-ALP is $R_{\mathrm{UCB-ALP}}(T, B) = O(\sqrt{T} + JK \log T)$.

Theorem 2 differs from Theorem 1 by an additional term $O(JK \log T)$. This term results from using UCB to learn the ordering of the expected rewards. Under UCB, each of the $JK$ context-action pairs should be executed roughly $O(\log T)$ times to obtain the correct ordering. For the non-boundary cases, UCB-ALP is order-optimal because obtaining the correct action ranking under each context will result in $O(\log T)$ regret [26].
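The UCB index of Section 4.1 and the sufficient sample count $\ell_{j,k} = \frac{2\log T}{(u_{j',k'} - u_{j,k})^2}$ from Lemma 3 can be written down directly. The sketch below shows both (function names are ours), illustrating why each of the $JK$ pairs contributes roughly $O(\log T)$ exploration rounds to the regret in Theorem 2:

```python
import math

def ucb_index(emp_mean, count, t):
    """UCB estimate of u_{j,k} at round t: empirical mean plus the
    exploration bonus sqrt(log t / (2 * count)); the optimistic
    value 1 is used while the pair is untried (count == 0)."""
    if count == 0:
        return 1.0
    return emp_mean + math.sqrt(math.log(t) / (2 * count))

def sufficient_pulls(gap, horizon):
    """l_{j,k} of Lemma 3: pulls of the suboptimal pair after which
    its UCB drops below the better pair's with high probability,
    for a reward gap u_{j',k'} - u_{j,k}."""
    return math.ceil(2 * math.log(horizon) / gap ** 2)
```

The bonus shrinks as a pair is tried more often, so the ordering of indices eventually matches the ordering of true rewards; `sufficient_pulls(0.1, 10**4)` evaluates to 1843, logarithmic in the horizon.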
Note that our results do not contradict the lower bound in [17], because we consider discrete contexts and actions, and focus on instance-dependent regret. For the boundary cases, we keep both the $\sqrt{T}$ and $\log T$ terms because the constant in the $\log T$ term is typically much larger than that in the $\sqrt{T}$ term. Therefore, the $\log T$ term may dominate the regret for medium $T$, particularly when the number of context-action pairs is large. It is still an open problem whether one can achieve regret lower than $O(\sqrt{T})$ in these cases.

Sketch of Proof: We bound the regret of UCB-ALP by comparing its performance with the benchmark $\hat U(T, B)$. The analysis of this bound is challenging due to the close interactions among different sources of regret and the randomness of context arrivals. We first partition the regret according to its sources and then bound each part of the regret, respectively.

Step 1: Partition the regret. By analyzing the implementation of UCB-ALP, we show that its regret is bounded as
$$R_{\mathrm{UCB-ALP}}(T, B) \le R^{(a)}_{\mathrm{UCB-ALP}}(T, B) + R^{(c)}_{\mathrm{UCB-ALP}}(T, B),$$
where the first part, $R^{(a)}_{\mathrm{UCB-ALP}}(T, B) = \sum_{j=1}^{J} \sum_{k \ne k_j^*} (u_j^* - u_{j,k})\, \mathbb{E}[C_{j,k}(T)]$, is the regret from action ranking errors within a context, and the second part, $R^{(c)}_{\mathrm{UCB-ALP}}(T, B) = \sum_{\tau=1}^{T} \mathbb{E}\big[v(\rho) - \sum_{j=1}^{J} \hat p_j(b_\tau/\tau)\, \pi_j u_j^*\big]$, is the regret from the fluctuations of $b_\tau$ and context ranking errors.

Step 2: Bound each part of the regret. For the first part, we can show that $R^{(a)}_{\mathrm{UCB-ALP}}(T, B) = O(\log T)$ using techniques similar to those for traditional UCB methods [25]. The major challenge of the regret analysis for UCB-ALP then lies in the evaluation of the second part, $R^{(c)}_{\mathrm{UCB-ALP}}(T, B)$. We first verify that the evolution of $b_\tau$ under UCB-ALP is similar to that under ALP, and Lemma 2 still holds under UCB-ALP. With respect to context ranking errors, we note that unlike classic UCB methods, not all context ranking errors contribute to the regret, due to the threshold structure of ALP.
Therefore, we carefully categorize the context ranking results based on their contributions. We briefly discuss the analysis for the non-boundary cases here. Recall that $\tilde j(\rho)$ is the threshold for the static LP problem $\mathrm{LP}_{T,B}$. We define the following events that capture all possible ranking results based on the UCBs:
$$\mathcal{E}_{\mathrm{rank},0}(t) = \{\forall j \le \tilde j(\rho),\ \hat u_j^*(t) > \hat u^*_{\tilde j(\rho)+1}(t);\ \forall j > \tilde j(\rho)+1,\ \hat u_j^*(t) < \hat u^*_{\tilde j(\rho)+1}(t)\},$$
$$\mathcal{E}_{\mathrm{rank},1}(t) = \{\exists j \le \tilde j(\rho),\ \hat u_j^*(t) \le \hat u^*_{\tilde j(\rho)+1}(t);\ \forall j > \tilde j(\rho)+1,\ \hat u_j^*(t) < \hat u^*_{\tilde j(\rho)+1}(t)\},$$
$$\mathcal{E}_{\mathrm{rank},2}(t) = \{\exists j > \tilde j(\rho)+1,\ \hat u_j^*(t) \ge \hat u^*_{\tilde j(\rho)+1}(t)\}.$$
The first event $\mathcal{E}_{\mathrm{rank},0}(t)$ indicates a roughly correct context ranking, because under $\mathcal{E}_{\mathrm{rank},0}(t)$ UCB-ALP obtains a correct solution for $\mathrm{LP}_{\tau,b_\tau}$ if $b_\tau/\tau \in [q_{\tilde j(\rho)}, q_{\tilde j(\rho)+1}]$. The last two events $\mathcal{E}_{\mathrm{rank},s}(t)$, $s = 1, 2$, represent two types of context ranking errors: $\mathcal{E}_{\mathrm{rank},1}(t)$ corresponds to "certain contexts with above-threshold reward having lower UCB", while $\mathcal{E}_{\mathrm{rank},2}(t)$ corresponds to "certain contexts with below-threshold reward having higher UCB". Let $T^{(s)} = \sum_{t=1}^{T} \mathbf{1}(\mathcal{E}_{\mathrm{rank},s}(t))$ for $0 \le s \le 2$. We can show that the expected number of context ranking errors satisfies $\mathbb{E}[T^{(s)}] = O(JK \log T)$, $s = 1, 2$, implying that $R^{(c)}_{\mathrm{UCB-ALP}}(T, B) = O(JK \log T)$. Summarizing the two parts, we have $R_{\mathrm{UCB-ALP}}(T, B) = O(JK \log T)$ for the non-boundary cases. The regret for the boundary cases can be bounded using similar arguments.

Key Insights from UCB-ALP: Constrained contextual bandits involve complicated interactions between information acquisition and decision making. UCB-ALP alleviates these interactions by approximating the oracle with ALP for decision making. This approximation achieves near-optimal performance while tolerating certain estimation errors of system statistics, and thus enables the combination with estimation methods such as UCB in the unknown-statistics case. Moreover, the adaptation property of UCB-ALP guarantees the concentration property of the system status, e.g., $b_\tau/\tau$.
This allows us to separately study the impact of action or context ranking errors and conduct a rigorous analysis of the regret. These insights can be applied in algorithm design and analysis for constrained contextual bandits under more general settings.

5 Bandits with Unknown Context Distribution

When the context distribution is unknown, a reasonable heuristic is to replace the probability $\pi_j$ in ALP with its empirical estimate, i.e., $\hat\pi_j(t) = \frac{1}{t}\sum_{t'=1}^{t} \mathbf{1}(X_{t'} = j)$. We refer to this modified ALP algorithm as Empirical ALP (EALP), and its combination with UCB as UCB-EALP. The empirical distribution provides a maximum likelihood estimate of the context distribution, and the EALP and UCB-EALP algorithms achieve performance similar to ALP and UCB-ALP, respectively, as observed in numerical simulations. However, a rigorous analysis of EALP and UCB-EALP is much more challenging due to the dependency introduced by the empirical distribution. To tackle this issue, our rigorous analysis focuses on a truncated version of EALP, where we stop updating the empirical distribution after a given round. Using the method of bounded averaged differences based on a coupling argument, we obtain the concentration property of the average remaining budget $b_\tau/\tau$, and show that this truncated EALP algorithm achieves $O(1)$ regret except for the boundary cases. The regret of the corresponding UCB-based version can be bounded similarly to UCB-ALP.

6 Bandits with Heterogeneous Costs

The insights obtained from unit-cost systems can also be used to design algorithms for heterogeneous-cost systems, where the cost $c_{j,k}$ depends on $j$ and $k$. We generalize the ALP algorithm to approximate the oracle, and adjust it to the case with unknown expected rewards. For simplicity, we assume the context distribution is known here; the empirical estimate can be used to replace the actual context distribution if it is unknown, as discussed in the previous section.
With heterogeneous costs, the quality of an action $k$ under a context $j$ is roughly captured by its normalized expected reward, defined as $\eta_{j,k} = u_{j,k}/c_{j,k}$. However, the agent cannot focus only on the "best" action, i.e., $k_j^* = \arg\max_{k \in \mathcal{A}} \eta_{j,k}$, for context $j$. This is because there may exist another action $k'$ such that $\eta_{j,k'} < \eta_{j,k_j^*}$ but $u_{j,k'} > u_{j,k_j^*}$ (and of course, $c_{j,k'} > c_{j,k_j^*}$). If the budget allocated to context $j$ is sufficient, then the agent may take action $k'$ to maximize the expected reward. Therefore, to approximate the oracle, the ALP algorithm in this case needs to solve an LP problem accounting for all context-action pairs, with an additional constraint that only one action can be taken under each context. By investigating the structure of ALP in this case and the concentration of the remaining budget, we show that ALP achieves $O(1)$ regret in the non-boundary cases, and $O(\sqrt{T})$ regret in the boundary cases. Then, an $\epsilon$-First ALP algorithm is proposed for the unknown-statistics case, where an exploration stage is implemented first, followed by an exploitation stage that follows ALP.

7 Conclusion

In this paper, we study computationally efficient algorithms that achieve logarithmic or sublinear regret for constrained contextual bandits. Under simplified yet practical assumptions, we show that the close interactions between information acquisition and decision making in constrained contextual bandits can be decoupled by adaptive linear relaxation. When the system statistics are known, the ALP approximation achieves near-optimal performance while tolerating certain estimation errors of system parameters. When the expected rewards are unknown, the proposed UCB-ALP algorithm leverages the advantages of ALP and UCB, and achieves $O(\log T)$ regret except for certain boundary cases, where it achieves $O(\sqrt{T})$ regret.
Our study provides an efficient approach to dealing with the challenges introduced by budget constraints and could potentially be extended to more general constrained contextual bandits.

Acknowledgements: This research was supported in part by NSF Grants CCF-1423542, CNS-1457060, CNS-1547461, and AFOSR MURI Grant FA 9550-10-1-0573.

References
[1] J. Langford and T. Zhang. The epoch-greedy algorithm for contextual multi-armed bandits. In Advances in Neural Information Processing Systems (NIPS), pages 817–824, 2007.
[2] T. Lu, D. Pál, and M. Pál. Contextual multi-armed bandits. In International Conference on Artificial Intelligence and Statistics, pages 485–492, 2010.
[3] L. Zhou. A survey on contextual multi-armed bandits. arXiv preprint arXiv:1508.03326, 2015.
[4] P. Auer, N. Cesa-Bianchi, and P. Fischer. Finite-time analysis of the multiarmed bandit problem. Machine Learning, 47(2-3):235–256, 2002.
[5] L. Li, W. Chu, J. Langford, and R. E. Schapire. A contextual-bandit approach to personalized news article recommendation. In ACM International Conference on World Wide Web (WWW), pages 661–670, 2010.
[6] A. Slivkins. Contextual bandits with similarity information. The Journal of Machine Learning Research, 15(1):2533–2568, 2014.
[7] A. Agarwal, D. Hsu, S. Kale, J. Langford, L. Li, and R. E. Schapire. Taming the monster: A fast and simple algorithm for contextual bandits. In International Conference on Machine Learning (ICML), 2014.
[8] P. Auer and R. Ortner. Logarithmic online regret bounds for undiscounted reinforcement learning. In Advances in Neural Information Processing Systems (NIPS), pages 49–56, 2007.
[9] A. Badanidiyuru, R. Kleinberg, and Y. Singer. Learning on a budget: posted price mechanisms for online procurement. In ACM Conference on Electronic Commerce, pages 128–145, 2012.
[10] T. L. Lai and O. Y.-W. Liao. Efficient adaptive randomization and stopping rules in multi-arm clinical trials for testing a new treatment.
Sequential Analysis, 31(4):441–457, 2012.
[11] L. Tran-Thanh, A. C. Chapman, A. Rogers, and N. R. Jennings. Knapsack based optimal policies for budget-limited multi-armed bandits. In AAAI Conference on Artificial Intelligence, 2012.
[12] A. Badanidiyuru, R. Kleinberg, and A. Slivkins. Bandits with knapsacks. In IEEE 54th Annual Symposium on Foundations of Computer Science (FOCS), pages 207–216, 2013.
[13] C. Jiang and R. Srikant. Bandits with budgets. In IEEE 52nd Annual Conference on Decision and Control (CDC), pages 5345–5350, 2013.
[14] A. Slivkins. Dynamic ad allocation: Bandits with budgets. arXiv preprint arXiv:1306.0155, 2013.
[15] Y. Xia, H. Li, T. Qin, N. Yu, and T.-Y. Liu. Thompson sampling for budgeted multi-armed bandits. In International Joint Conference on Artificial Intelligence, 2015.
[16] R. Combes, C. Jiang, and R. Srikant. Bandits with budgets: Regret lower bounds and optimal algorithms. In ACM Sigmetrics, 2015.
[17] A. Badanidiyuru, J. Langford, and A. Slivkins. Resourceful contextual bandits. In Conference on Learning Theory (COLT), 2014.
[18] R. Combes, A. Proutiere, D. Yun, J. Ok, and Y. Yi. Optimal rate sampling in 802.11 systems. In IEEE INFOCOM, pages 2760–2767, 2014.
[19] M. H. Veatch. Approximate linear programming for average cost MDPs. Mathematics of Operations Research, 38(3):535–544, 2013.
[20] S. Agrawal and N. R. Devanur. Bandits with concave rewards and convex knapsacks. In ACM Conference on Economics and Computation, pages 989–1006. ACM, 2014.
[21] S. Agrawal, N. R. Devanur, and L. Li. Contextual bandits with global constraints and objective. arXiv preprint arXiv:1506.03374, 2015.
[22] S. Agrawal and N. R. Devanur. Linear contextual bandits with global constraints and objective. arXiv preprint arXiv:1507.06738, 2015.
[23] D. P. Dubhashi and A. Panconesi. Concentration of Measure for the Analysis of Randomized Algorithms. Cambridge University Press, 2009.
[24] A. Garivier and O. Cappé.
The KL-UCB algorithm for bounded stochastic bandits and beyond. In Conference on Learning Theory (COLT), pages 359–376, 2011.
[25] D. Golovin and A. Krause. Dealing with partial feedback, 2009.
[26] T. L. Lai and H. Robbins. Asymptotically efficient adaptive allocation rules. Advances in Applied Mathematics, 6(1):4–22, 1985.
Latent Bayesian melding for integrating individual and population models

Mingjun Zhong, Nigel Goddard, Charles Sutton
School of Informatics, University of Edinburgh, United Kingdom
{mzhong,nigel.goddard,csutton}@inf.ed.ac.uk

Abstract

In many statistical problems, a more coarse-grained model may be suitable for population-level behaviour, whereas a more detailed model is appropriate for accurate modelling of individual behaviour. This raises the question of how to integrate both types of models. Methods such as posterior regularization follow the idea of generalized moment matching, in that they allow matching expectations between two models, but sometimes both models are most conveniently expressed as latent variable models. We propose latent Bayesian melding, which is motivated by averaging the distributions over population statistics of both the individual-level and the population-level models under a logarithmic opinion pool framework. In a case study on electricity disaggregation, which is a type of single-channel blind source separation problem, we show that latent Bayesian melding leads to significantly more accurate predictions than an approach based solely on generalized moment matching.

1 Introduction

Good statistical models of populations are often very different from good models of individuals. As an illustration, the population distribution over human height might be approximately normal, but to model an individual's height, we might use a more detailed discriminative model based on many features of the individual's genotype. As another example, in social network analysis, simple models like the preferential attachment model [3] replicate aggregate network statistics such as degree distributions, whereas to predict whether two individuals have a link, a social networking web site might well use a classifier with many features of each person's previous history.
Of course, every model of an individual implies a model of the population, but models whose goal is to model individuals tend to be necessarily more detailed. These two styles of modelling represent different types of information, so it is natural to want to combine them. A recent line of research in machine learning has explored the idea of incorporating constraints into Bayesian models that are difficult to encode in standard prior distributions. These methods, which include posterior regularization [9], learning with measurements [16], and the generalized expectation criterion [18], tend to follow a moment matching idea, in which expectations of the distribution of one model are encouraged to match values based on prior information. Interestingly, these ideas have precursors in the statistical literature on simulation models. In particular, Bayesian melding [21] considers applications in which there is a computer simulation $M$ that maps from model parameters $\theta$ to a quantity $\phi = M(\theta)$. For example, $M$ might summarize the output of a deterministic simulation of population dynamics or some other physical phenomenon. Bayesian melding considers the case in which we can build meaningful prior distributions over both $\theta$ and $\phi$. These two prior distributions need to be merged because of the deterministic relationship; this is done using a logarithmic opinion pool [5]. We show that there is a close connection between Bayesian melding and the later work on posterior regularization, which does not seem to have been recognized in the machine learning literature. We also show that Bayesian melding has the additional advantage that it can be conveniently applied when both individual-level and population-level models contain latent variables, as would commonly be the case, e.g., if they were mixture models or hierarchical Bayesian models. We call this approach latent Bayesian melding.
We present a detailed case study of latent Bayesian melding in the domain of energy disaggregation [11, 20], which is a particular type of blind source separation (BSS) problem. The goal of the energy disaggregation problem is to separate the total electricity usage of a building into a sum of source signals that describe the energy usage of individual appliances. This problem is hard because the source signals are not identifiable, which motivates work that adds additional prior information into the model [14, 15, 20, 25, 26, 8]. We show that the latent Bayesian melding approach allows the incorporation of new types of constraints into standard models for this problem, yielding a strong improvement in performance, in some cases amounting to a 50% error reduction over a moment matching approach.

2 The Bayesian melding approach

We briefly describe the Bayesian melding approach to integrating prior information in deterministic simulation models [21], which has seen wide application [1, 6, 23]. In the Bayesian modelling context, denote by $Y$ the observed data, and suppose that the model includes unknown variables $S$, which could include model parameters and latent variables. We are then interested in the posterior
$$p(S|Y) = p(Y)^{-1}\, p(Y|S)\, p_S(S). \qquad (1)$$
However, in some situations, the variables $S$ may be related to a new random variable $\tau$ by a deterministic simulation function $f(\cdot)$ such that $\tau = f(S)$. We call $S$ and $\tau$ the input and output variables. For example, in the energy disaggregation problem, the total energy consumption variable is $\tau = \sum_{t=1}^{T} S_t^\top \mu$, where the $S_t$ are the state variables of a hidden Markov model (one-hot encoding) and $\mu$ is a vector containing the mean energy consumption of each state (see Section 5.2). Both $\tau$ and $S$ are random variables, and so in the Bayesian context, the modellers usually choose appropriate priors $p_\tau(\tau)$ and $p_S(S)$ based on prior knowledge.
However, given $p_S(S)$, the map $f$ naturally induces another prior for $\tau$, which we denote by $p^*_\tau(\tau)$. Therefore, there are two different priors for the same variable $\tau$ from different sources, which might not be consistent. In the energy disaggregation example, $p^*_\tau$ is induced by the state variables $S_t$ of the hidden Markov model, which is the individual model of a specific household, and $p_\tau$ could be modelled by using population information, e.g., from a national survey; we can think of this as a population model since it combines information from many households. The Bayesian melding approach combines the two priors into one by using the logarithmic pooling method, so that the logarithmically pooled prior is $\tilde p_\tau(\tau) \propto p^*_\tau(\tau)^\alpha\, p_\tau(\tau)^{1-\alpha}$, where $0 \le \alpha \le 1$. The prior $\tilde p_\tau$ melds the prior information of both $S$ and $\tau$. In the model (1), the prior $p_S$ does not include information about $\tau$. Thus it is required to derive a melded prior for $S$. If $f$ is invertible, the prior for $S$ can be obtained by using the change-of-variable technique. If $f$ is not invertible, Poole and Raftery [21] heuristically derived a melded prior
$$\tilde p_S(S) = c_\alpha\, p_S(S) \left(\frac{p_\tau(f(S))}{p^*_\tau(f(S))}\right)^{1-\alpha}, \qquad (2)$$
where $c_\alpha$ is a constant, given $\alpha$, such that $\int \tilde p_S(S)\, dS = 1$. This gives a new posterior $\tilde p(S|Y) = \tilde p(Y)^{-1}\, p(Y|S)\, \tilde p_S(S)$. Note that it would be interesting to infer $\alpha$ [22, 7]; however, we use a fixed value in this paper. So far we have been assuming there are no latent variables in $p_\tau$. We now consider the situation when $\tau$ is generated by some latent variables.

3 The latent Bayesian melding approach

It is common for the variable $\tau$ to be modelled by a latent variable $\xi$; see the examples in Section 5.2. So we could assume that we have a conditional distribution $p(\tau|\xi)$ and a prior distribution $p_\xi(\xi)$. This defines a marginal distribution $p_\tau(\tau) = \int p_\xi(\xi)\, p(\tau|\xi)\, d\xi$. This could be used to produce the melded prior (2) of the Bayesian melding approach:
$$\tilde p_S(S) = c_\alpha\, p_S(S) \left(\frac{\int p_\tau(f(S)|\xi)\, p_\xi(\xi)\, d\xi}{p^*_\tau(f(S))}\right)^{1-\alpha}. \qquad (3)$$
(3)

The integration in (3) is generally intractable. We could employ the Monte Carlo method to approximate it for a fixed $\tau$. However, importantly, we are also interested in inferring the latent variable $\xi$, which is meaningful, for example, in the energy disaggregation problem. When we are interested in finding the maximum a posteriori (MAP) value of the posterior in which $\tilde p_S(S)$ is used as the prior, we propose to use the rough approximation $\int p_\xi(\xi)\, p_\tau(\tau|\xi)\, d\xi \approx \max_\xi p_\xi(\xi)\, p_\tau(\tau|\xi)$. This leads to an approximate prior

$$\tilde p_S(S) \approx \max_\xi \tilde p_{S,\xi}(S,\xi) = \max_\xi\; c_\alpha\, p_S(S) \left( \frac{p_\tau(f(S)|\xi)\, p_\xi(\xi)}{p^*_\tau(f(S))} \right)^{1-\alpha}. \quad (4)$$

To obtain this approximate prior for $S$, the joint prior $\tilde p_{S,\xi}(S,\xi)$ has to exist, and so we show that it does exist under certain conditions by the following theorem. We assume that $S$ and $\xi$ are continuous random variables, and that both $p^*_\tau$ and $p_\tau$ are positive and share the same support. Also, $\mathbb{E}_{p_S(S)}[\cdot]$ denotes the expectation with respect to $p_S$.

Theorem 1. If $\mathbb{E}_{p_S(S)}\!\left[ \frac{p_\tau(f(S))}{p^*_\tau(f(S))} \right] < \infty$, then a constant $c_\alpha < \infty$ exists such that $\int \tilde p_{S,\xi}(S,\xi)\, d\xi\, dS = 1$, for any fixed $\alpha \in [0,1]$.

The proof can be found in the supplementary materials. In (4) we heuristically derived an approximate joint prior $\tilde p_{S,\xi}$. Interestingly, if $\xi$ and $S$ are independent conditional on $\tau$, we can show as follows that $\tilde p_{S,\xi}$ is a limit distribution derived from a joint distribution of $\xi$ and $S$ induced by $\tau$. To see this, we derive a joint prior for $S$ and $\xi$:

$$p_{S,\xi}(S,\xi) = \int p(S,\xi|\tau)\, p_\tau(\tau)\, d\tau = \int p(S|\tau)\, p(\xi|\tau)\, p_\tau(\tau)\, d\tau = \int \frac{p(\tau|S)\, p_S(S)}{p^*_\tau(\tau)} \cdot \frac{p(\tau|\xi)\, p_\xi(\xi)}{p_\tau(\tau)}\, p_\tau(\tau)\, d\tau = p_S(S)\, p_\xi(\xi) \int \frac{p(\tau|S)\, p(\tau|\xi)}{p^*_\tau(\tau)}\, d\tau.$$

For a deterministic simulation $\tau = f(S)$, the distribution $p(\tau|S) = p(\tau|S, \tau = f(S))$ is ill-defined due to Borel's paradox [24]. The distribution $p(\tau|S)$ depends on the parameterization. We assume that $\tau$ is uniform on $[f(S) - \delta,\, f(S) + \delta]$ conditional on $S$, with $\delta > 0$; the distribution is then denoted by $p_\delta(\tau|S)$. The marginal distribution is $p_\delta(\tau) = \int p_\delta(\tau|S)\, p_S(S)\, dS$. Denote $g(\tau) = \frac{p(\tau|\xi)}{p^*_\tau(\tau)}$ and $g_\delta(\tau) = \frac{p(\tau|\xi)}{p_\delta(\tau)}$.
Then we have the following theorem.

Theorem 2. If $\lim_{\delta \to 0} p_\delta(\tau) = p^*_\tau(\tau)$, and $g_\delta(\tau)$ has bounded derivatives of any order, then $\lim_{\delta \to 0} \int p_\delta(\tau|S)\, g_\delta(\tau)\, d\tau = g(f(S))$.

See the supplementary materials for the proof. Under this parameterization, we denote

$$\hat p_{S,\xi}(S,\xi) = p_S(S)\, p_\xi(\xi) \lim_{\delta \to 0} \int p_\delta(\tau|S)\, g_\delta(\tau)\, d\tau = p_S(S)\, p_\xi(\xi)\, \frac{p(f(S)|\xi)}{p^*_\tau(f(S))}.$$

By applying the logarithmic pooling method, we have a joint prior

$$\tilde p_{S,\xi}(S,\xi) = c_\alpha \left( p_S(S) \right)^{\alpha} \left( \hat p_{S,\xi}(S,\xi) \right)^{1-\alpha} = c_\alpha\, p_S(S) \left( \frac{p_\tau(f(S)|\xi)\, p_\xi(\xi)}{p^*_\tau(f(S))} \right)^{1-\alpha}.$$

Since the joint prior blends the variable $S$ and the latent variable $\xi$, we call this approximation the latent Bayesian melding (LBM) approach, which gives the posterior $\tilde p(S,\xi|Y) = \tilde p(Y)^{-1}\, p(Y|S)\, \tilde p_{S,\xi}(S,\xi)$. Note that if there are no latent variables, then latent Bayesian melding collapses to the Bayesian melding approach. In Section 6 we will apply this method to an energy disaggregation problem for integrating population information with an individual model.

4 Related methods

We now discuss possible connections between Bayesian melding (BM) and other related methods. Recently in machine learning, moment matching methods have been proposed, e.g., posterior regularization (PR) [9], learning with measurements [16] and the generalized expectation criterion [18]. These methods share the common idea that the Bayesian models (or posterior distributions) are constrained by some observations or measurements to obtain a least-biased distribution. The idea is that the system we are modelling is too complex and unobservable, and thus we have limited prior information. To alleviate this problem, we assume we can obtain some observations of the system in some way, e.g., by experiments; for example, those observations could be the mean values of functions of the variables. Those observations could then guide the modelling of the system.
Interestingly, a very similar idea has been employed in the bias correction method in information theory and statistics [12, 10, 19], where the least-biased distribution is obtained by optimizing the Kullback-Leibler divergence subject to the moment constraints. Note that the bias correction method in [17] is different from the others, in that the bias of a consistent estimator was corrected when the bias function could be estimated. We now consider the posteriors derived by PR and BM. In general, given a function $f(S)$ and values $b_i$, PR solves the constrained problem

$$\operatorname*{minimize}_{\tilde p}\; \mathrm{KL}(\tilde p(S)\, \|\, p(S|Y)) \quad \text{subject to} \quad \mathbb{E}_{\tilde p}\left( m_i(f(S)) \right) - b_i \le \delta_i,\; \|\delta_i\| \le \epsilon;\; i = 1, 2, \cdots, I,$$

where $m_i$ could be any function, such as a power function. This gives an optimal posterior $\tilde p_{PR}(S) = Z(\lambda)^{-1}\, p(Y|S)\, p(S) \prod_{i=1}^{I} \exp(-\lambda_i m_i(f(S)))$, where $Z(\lambda)$ is the normalizing constant. BM has a deterministic simulation $f(S) = \tau$ where $\tau \sim p_\tau$. The posterior is then $\tilde p_{BM}(S) = Z(\alpha)^{-1}\, p(Y|S)\, p(S) \left( \frac{p_\tau(f(S))}{p^*_\tau(f(S))} \right)^{1-\alpha}$. They have a similar form, and the key difference is the last factor, which is derived from the constraints or the deterministic simulation. $\tilde p_{PR}$ and $\tilde p_{BM}$ are identical if $-\sum_{i=1}^{I} \lambda_i m_i(f(S)) = (1-\alpha) \log \frac{p_\tau(f(S))}{p^*_\tau(f(S))}$. The difference between BM and LBM is the latent variable $\xi$. We could perform BM by integrating out $\xi$ in (3), but this is computationally expensive. Instead, LBM jointly models $S$ and $\xi$, allowing possibly joint inference, which is an advantage over BM.

5 The energy disaggregation problem

In energy disaggregation, we are given a time series of energy consumption readings from a sensor. We consider the energy measured in watt hours as read from a household's electricity meter, which is denoted by $Y = (Y_1, Y_2, \cdots, Y_T)$ where $Y_t \in \mathbb{R}_+$. The recorded energy signal $Y$ is assumed to be the aggregation of the consumption of individual appliances in the household. Suppose there are $I$ appliances, and the energy consumption of each appliance is denoted by $X_i = (X_{i1}, X_{i2}, \cdots, X_{iT})$ where $X_{it} \in \mathbb{R}_+$.
The observed aggregate signal is assumed to be the sum of the component signals, so that $Y_t = \sum_{i=1}^{I} X_{it} + \epsilon_t$ where $\epsilon_t \sim N(0, \sigma^2)$. Given $Y$, the task is to infer the unknown component signals $X_i$. This is essentially the single-channel BSS problem, for which there is no unique solution. It can also be useful to add an extra component $U = (U_1, U_2, \cdots, U_T)$ to model the unknown appliances, making the model more robust, as proposed in [15]. The prior of $U_t$ is defined as

$$p(U) = \frac{1}{v^{2(T-1)}} \exp\left\{ -\frac{1}{2v^2} \sum_{t=1}^{T-1} |U_{t+1} - U_t| \right\}.$$

The model then has the new form $Y_t = \sum_{i=1}^{I} X_{it} + U_t + \epsilon_t$. A natural way to represent this model is as an additive factorial hidden Markov model (AFHMM), where the appliances are treated as HMMs [15, 20, 26]; this is now described.

5.1 The additive factorial hidden Markov model

In the AFHMM, each component signal $X_i$ is represented by an HMM. We suppose there are $K_i$ states for each $X_{it}$, and so the state variable is denoted by $Z_{it} \in \{1, 2, \cdots, K_i\}$. Since $X_i$ is an HMM, the initial probabilities are $\pi_{ik} = P(Z_{i1} = k)$ ($k = 1, 2, \cdots, K_i$) where $\sum_{k=1}^{K_i} \pi_{ik} = 1$; the mean values are $\mu_i = (\mu_{i1}, \mu_{i2}, \cdots, \mu_{iK_i})$ such that $X_{it} \in \mu_i$; the transition probabilities are $P^{(i)} = (p^{(i)}_{jk})$ where $p^{(i)}_{jk} = P(Z_{it} = j\,|\,Z_{i,t-1} = k)$ and $\sum_{j=1}^{K_i} p^{(i)}_{jk} = 1$. We denote all these parameters $\{\pi_i, \mu_i, P^{(i)}\}$ by $\theta$. We assume they are known and can be learned from the training data. Instead of using $Z$, we could use a binary vector $S_{it} = (S_{it1}, S_{it2}, \cdots, S_{itK_i})^\top$ to represent the variable $Z$, such that $S_{itk} = 1$ when $Z_{it} = k$ and $S_{itj} = 0$ for all $j \neq k$. Then we are interested in inferring the states $S_{it}$ instead of inferring $X_{it}$ directly, since $X_{it} = S_{it}^\top \mu_i$.
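The additive model just described can be sketched generatively; the two appliances, their state means and transition probabilities below are made-up illustrations (state 0 plays the role of OFF), not parameters learned from data.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_chain(pi, P, T):
    """Sample a state path of length T from a Markov chain with initial
    distribution pi and transitions P[j, k] = P(z_t = j | z_{t-1} = k)."""
    K = len(pi)
    z = np.empty(T, dtype=int)
    z[0] = rng.choice(K, p=pi)
    for t in range(1, T):
        z[t] = rng.choice(K, p=P[:, z[t - 1]])
    return z

# Two illustrative appliances, each a 3-state HMM (state 0 = OFF).
T = 200
mus = [np.array([0.0, 50.0, 200.0]),     # e.g. a fridge-like appliance
       np.array([0.0, 1000.0, 2000.0])]  # e.g. a kettle-like appliance
pi = np.array([0.9, 0.05, 0.05])
P = np.array([[0.95, 0.30, 0.30],        # columns sum to 1
              [0.03, 0.60, 0.10],
              [0.02, 0.10, 0.60]])

X = np.stack([mu[sample_chain(pi, P, T)] for mu in mus])  # component signals X_it
noise = rng.normal(0.0, 5.0, size=T)                      # eps_t ~ N(0, sigma^2)
Y = X.sum(axis=0) + noise                                 # aggregate signal Y_t
```

The disaggregation task is then the inverse direction: given only `Y`, recover the rows of `X`, which is exactly where the non-identifiability arises.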
Therefore we want to make inference over the posterior distribution $P(S, U, \sigma^2\,|\,Y, \theta) \propto p(Y|S, U, \sigma^2)\, P(S|\theta)\, p(U)\, p(\sigma^2)$, where the HMM defines the prior of the states

$$P(S|\theta) \propto \prod_{i=1}^{I} \prod_{k=1}^{K_i} \pi_{ik}^{S_{i1k}} \times \prod_{t=2}^{T} \prod_{i=1}^{I} \prod_{k,j} \left( p^{(i)}_{kj} \right)^{S_{itk} S_{i,t-1,j}},$$

the inverse noise variance is assumed to follow a Gamma distribution $p(\sigma^{-2}) \propto (\sigma^{-2})^{\alpha-1} \exp(-\beta \sigma^{-2})$, and the data likelihood has the Gaussian form

$$p(Y|S, U, \sigma^2, \theta) = |2\pi\sigma^2|^{-\frac{T}{2}} \exp\left( -\frac{1}{2\sigma^2} \sum_{t=1}^{T} \left( Y_t - \sum_{i=1}^{I} S_{it}^\top \mu_i - U_t \right)^2 \right).$$

To make the MAP inference over $S$, we relax the binary variable $S_{itk}$ to be continuous in the range $[0, 1]$, as in [15, 26]. It has been shown that incorporating domain knowledge into the AFHMM can help to reduce the identifiability problem [15, 20, 26]. The domain knowledge we will incorporate using LBM is the summary statistics.

5.2 Population modelling of summary statistics

In energy disaggregation, it is useful to provide summaries of energy consumption to the users. For example, it would be useful to show the householders the total energy they had consumed in one day for their appliances, the duration that each appliance was in use, and the number of times that they had used these appliances. Since there already exist data about typical usage of different appliances [4], we can employ these data to model the distributions of those summary statistics. We denote those desired statistics by $\tau = \{\tau_i\}_{i=1}^{I}$, where $i$ denotes the appliance. For appliance $i$, we assume we have measured some time series from different houses for many days. This is always possible because we can collect them from public data sets, e.g., the data reviewed in [4]. We can then empirically obtain the distributions of those statistics. The distribution is represented by $p_m(\tau_{im}\,|\,\Gamma_{im}, \eta_{im})$, where $\Gamma_{im}$ represents the empirical quantities of the statistic $m$ of the appliance $i$, which can be obtained from data, and $\eta_{im}$ are latent variables which might not be known. Since $\eta_{im}$ are variables, we can employ a prior distribution $p(\eta_{im})$.
We now give some examples of those statistics.

Total energy consumption: The total energy consumption of an appliance can be represented as a function of the states of the HMM, such that $\tau_i = \sum_{t=1}^{T} S_{it}^\top \mu_i$.

Duration of appliance usage: The duration of using the appliance $i$ can also be represented as a function of the states, $\tau_i = \Delta_t \sum_{t=1}^{T} \sum_{k=2}^{K_i} S_{itk}$, where $\Delta_t$ represents the sampling duration for a data point of the appliances, and we assume that $S_{it1}$ represents the off state, which means the appliance was turned off.

Number of cycles: The number of cycles (the number of times an appliance is used) can be counted by computing the number of transitions from the OFF state to an ON state, such that $\tau_i = \sum_{t=2}^{T} \sum_{k=2}^{K_i} \mathbb{I}(S_{itk} = 1,\, S_{i,t-1,1} = 1)$. Let the binary vector $\xi_i = (\xi_{i1}, \xi_{i2}, \cdots, \xi_{ic}, \cdots, \xi_{iC_i})$ represent the number of cycles, where $\xi_{ic} = 1$ means that the appliance $i$ had been used for $c$ cycles, and $\sum_{c=1}^{C_i} \xi_{ic} = 1$. (Note that $\xi_i$ is an example of $\eta_i$ in this case.) To model these statistics in our LBM framework, the latent variable that we use is the number of cycles $\xi$.

The distributions of $\tau_i$ could be empirically modelled by using the observation data. One approach is to assume a Gaussian mixture density such that $p(\tau_i\,|\,\xi_i) = \sum_{c=1}^{C_i} p(\xi_{ic} = 1)\, p_c(\tau_i\,|\,\Gamma_i)$, where $\sum_{c=1}^{C_i} p(\xi_{ic} = 1) = 1$ and $p_c$ is the Gaussian component density. Using the Gaussian mixture, we basically assume that, for an appliance, given the number of cycles the total energy consumption is modelled by a Gaussian with mean $\mu_{ic}$ and variance $\sigma^2_{ic}$. A simpler model would be a linear regression model such that $\tau_i = \sum_{c=1}^{C_i} \xi_{ic} \mu_{ic} + \epsilon_i$ where $\epsilon_i \sim N(0, \sigma^2_i)$. This model assumes that, given the number of cycles, the total energy consumption is close to the mean $\mu_{ic}$. The mixture model is more appropriate than the regression model, but the inference is more difficult. When $\tau_i$ represents the number of cycles for appliance $i$, we can use $\tau_i = \sum_{c=1}^{C_i} c_{ic} \xi_{ic}$ where $c_{ic}$ represents the number of cycles.
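The three statistics above can be computed directly from the one-hot state matrix of a single appliance. A minimal sketch (the 2-minute sampling interval matches the HES data used later; the example state sequence and state means are illustrative only):

```python
import numpy as np

def summary_statistics(S, mu, delta_t=2.0 / 60.0):
    """Summary statistics of one appliance from its one-hot state matrix.

    S       : (T, K) one-hot state matrix; column 0 is the OFF state.
    mu      : (K,) mean energy consumption of each state.
    delta_t : sampling interval in hours (2-minute readings by default).
    """
    total_energy = float((S @ mu).sum())           # tau_i = sum_t S_it^T mu_i
    duration = delta_t * float(S[:, 1:].sum())     # time spent in any ON state
    on = S[:, 1:].sum(axis=1)                      # 1 if the appliance is ON at t
    cycles = int(((on[1:] == 1) & (on[:-1] == 0)).sum())  # OFF -> ON switches
    return total_energy, duration, cycles

# Example: OFF, ON, ON, OFF, ON with K = 2 states and mu = [0, 100].
S = np.array([[1, 0], [0, 1], [0, 1], [1, 0], [0, 1]], dtype=float)
energy, dur, cyc = summary_statistics(S, np.array([0.0, 100.0]))
```

In the example the appliance is ON for three of the five samples and is switched on twice, so the duration is three sampling intervals and the cycle count is two.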
When the state variables $S_i$ are relaxed to $[0, 1]$, we can then employ a noise model such that $\tau_i = \sum_{c=1}^{C_i} c_{ic} \xi_{ic} + \epsilon_i$ where $\epsilon_i \sim N(0, \sigma^2_i)$. We model $\xi_i$ with a discrete distribution such that $P(\xi_i) = \prod_{c=1}^{C_i} p_{ic}^{\xi_{ic}}$, where $p_{ic}$ represents the prior probability of the number of cycles for the appliance $i$, which can be obtained from the training data. We now show how to use the LBM to integrate the AFHMM with these population distributions.

6 The latent Bayesian melding approach to energy disaggregation

We have shown that the summary statistics $\tau$ can be represented as a deterministic function of the state variables $S$ of the HMMs, such that $\tau = f(S)$, which means that $\tau$ itself can be represented by a latent variable model. We could then straightforwardly employ the LBM to produce a joint prior over $S$ and $\xi$:

$$\tilde p_{S,\xi}(S,\xi) = c_\alpha\, p_S(S) \left( \frac{p_\tau(f(S)|\xi)\, p(\xi)}{p^*_\tau(f(S))} \right)^{1-\alpha}.$$

Since in our model $f$ is not invertible, we need to generate a proper density for $p^*_\tau$. One possible way is to generate $N$ random samples $\{S^{(n)}\}_{n=1}^{N}$ from the prior $p_S(S)$, which is an HMM, and then $p^*_\tau$ can be modelled by using kernel density estimation. However, this would make the inference difficult. Instead, we employ a Gaussian density $p^*_{\tau_{im}}(\tau_{im}) = N(\hat\mu_{im}, \hat\sigma^2_{im})$, where $\hat\mu_{im}$ and $\hat\sigma^2_{im}$ are computed from $\{S^{(n)}\}_{n=1}^{N}$. The new posterior distribution of LBM thus has the form

$$p(S, U, \Sigma\,|\,Y, \theta) \propto p(\Sigma)\, p(U)\, \tilde p_{S,\xi}(S,\xi)\, p(Y|S, U, \sigma^2) = p(\Sigma)\, p(U)\, c_\alpha\, p_S(S) \left( \frac{p_\tau(f(S)|\xi)\, p(\xi)}{p^*_\tau(f(S))} \right)^{1-\alpha} p(Y|S, U, \sigma^2),$$

where $\Sigma$ represents the collection of all the noise variances. All the inverse noise variances employ the Gamma distribution as the prior. We are interested in inferring the MAP values. Since the variables $S$ and $\xi$ are binary, we would have to solve a combinatorial optimization problem, which is intractable, so we solve a relaxed problem as in [15, 26]. Since $\log p_S(S)$ is not convex, we employ the relaxation method of [15].
So a new $K_i \times K_i$ variable matrix $H^{it} = (h^{it}_{jk})$ is introduced such that $h^{it}_{jk} = 1$ when $S_{i,t-1,k} = 1$ and $S_{itj} = 1$, and otherwise $h^{it}_{jk} = 0$. Under these constraints, we then obtain

$$\log p_S(S) = \log p(S, H) = \sum_{i=1}^{I} S_{i1}^\top \log \pi_i + \sum_{i,t,k,j} h^{it}_{jk} \log p^{(i)}_{jk},$$

which is now linear. We optimize the log-posterior, which is denoted by $L(S, H, U, \Sigma, \xi)$. The constraints for those variables are represented as the sets

$$Q_S = \left\{ \textstyle\sum_{k=1}^{K_i} S_{itk} = 1,\; S_{itk} \in [0,1],\; \forall i, t \right\}, \qquad Q_\xi = \left\{ \textstyle\sum_{c=1}^{C_i} \xi_{ic} = 1,\; \xi_{ic} \in [0,1],\; \forall i \right\},$$

$$Q_{H,S} = \left\{ \textstyle\sum_{l=1}^{K_i} H^{it}_{l\cdot} = S_{i,t-1}^\top,\; \sum_{l=1}^{K_i} H^{it}_{\cdot l} = S_{it},\; h^{it}_{jk} \in [0,1],\; \forall i, t \right\}, \qquad Q_{U,\Sigma} = \left\{ U \ge 0,\; \Sigma \ge 0,\; \sigma^2_{im} < \hat\sigma^2_{im},\; \forall i, m \right\}.$$

Denote $Q = Q_S \cup Q_\xi \cup Q_{H,S} \cup Q_{U,\Sigma}$. The relaxed optimization problem is then to maximize $L(S, H, U, \Sigma, \xi)$ over $S, H, U, \Sigma, \xi$ subject to $Q$. We observed that every term in $L$ is either quadratic or linear when $\Sigma$ is fixed, and the solutions for $\Sigma$ are deterministic when the other variables are fixed. The constraints are all linear. Therefore, we optimize $\Sigma$ while fixing all the other variables, and then optimize all the other variables simultaneously while fixing $\Sigma$. This optimization problem is then a convex quadratic program (CQP), for which we use MOSEK [2]. We denote this method by AFHMM+LBM.

7 Experimental results

We have incorporated population information into the AFHMM by employing the latent Bayesian melding approach. In this section, we apply the proposed model to the disaggregation problem. We will compare the new approach with AFHMM+PR [26], using the set of statistics $\tau$ described in Section 5.2. The key difference between our method AFHMM+LBM and AFHMM+PR is that AFHMM+LBM models the statistics $\tau$ conditional on the number of cycles $\xi$.

7.1 The HES data

We apply AFHMM, AFHMM+PR and AFHMM+LBM to the Household Electricity Survey (HES) data.¹ This data set was gathered in a recent study commissioned by the UK Department of Food and Rural Affairs.
The study monitored 251 households, selected to be representative of the population, across England from May 2010 to July 2011 [27]. Individual appliances were monitored, and in some households the overall electricity consumption was also monitored. The data were recorded every 2 or 10 minutes for different houses. We used only the 2-minute data. We then used the individual appliance signals to train the model parameters $\theta$ of the AFHMM, which are used as the input to the models for disaggregation. Note that we assumed the HMMs have 3 states for all the appliances. This number of states is widely applied in energy disaggregation problems, though our method could easily be applied to larger state spaces. In the HES data, in some houses the overall electricity consumption (the mains) was monitored. However, in most houses, only a subset of individual appliances were monitored, and the total electricity readings were not recorded.

¹The HES dataset and information on how the raw data was cleaned can be found from https://www.gov.uk/government/publications/household-electricity-survey.

Table 1: Normalized disaggregation error (NDE), signal aggregate error (SAE), duration aggregate error (DAE), and cycle aggregate error (CAE) by AFHMM+PR and AFHMM+LBM on synthetic mains in HES data.

METHODS      NDE          SAE          DAE          CAE          TIME (S)
AFHMM        1.45±0.88    1.42±0.39    1.56±0.23    1.41±0.31    179.3±1.9
AFHMM+PR     0.87±0.21    0.86±0.39    0.83±0.53    1.57±0.66    195.4±3.2
AFHMM+LBM    0.89±0.49    0.87±0.37    0.76±0.32    0.79±0.35    198.1±3.1

Table 2: Normalized disaggregation error (NDE), signal aggregate error (SAE), duration aggregate error (DAE), and cycle aggregate error (CAE) by AFHMM+PR and AFHMM+LBM on mains in HES data.

METHODS      NDE          SAE          DAE          CAE          TIME (S)
AFHMM        1.90±1.16    2.26±0.86    1.91±0.67    1.12±0.17    170.8±33.3
AFHMM+PR     0.91±0.11    0.67±0.07    0.68±0.18    1.65±0.49    214.2±38.1
AFHMM+LBM    0.77±0.23    0.68±0.19    0.61±0.22    0.98±0.32    224.8±34.8
Generating the population information: Most of the houses in HES did not monitor the mains readings, but they all recorded the individual appliances' consumption. We used a subset of the houses to generate the population information of the individual appliances. We used the population information of total energy consumption, duration of appliance usage and the number of cycles in a time period. In our experiments, the time period was one day. We modelled the distributions of these summary statistics by using the methods described in Section 5.2, where the distributions were Gaussian. All the quantities required for modelling these distributions were generated by using the samples of the individual appliances.

Houses without mains readings: In this experiment, we randomly selected one hundred households, and one day's usage was used as test data for each household. Since no mains readings were monitored in these houses, we added up the appliance readings to generate synthetic mains readings. We then applied AFHMM, AFHMM+PR and AFHMM+LBM to these synthetic mains to predict the individual appliance usage. To compare these three methods, we employed four error measures. Denote $\hat x_i$ as the inferred signal for the appliance usage $x_i$. One measure is the normalized disaggregation error (NDE),

$$\mathrm{NDE} = \frac{\sum_{it} (x_{it} - \hat x_{it})^2}{\sum_{it} x_{it}^2}.$$

This measures how well the method predicts the energy consumption at every time point. However, the householders might be more interested in summaries of the appliance usage. For example, in a particular time period, e.g., one day, people are interested in the total energy consumption of the appliances, the total time they have been using those appliances, and how many times they have used them.
We thus employ

$$\frac{1}{I}\, \frac{\sum_{i=1}^{I} |\hat r_i - r_i|}{\sum_i r_i}$$

as the signal aggregate error (SAE), the duration aggregate error (DAE) or the cycle aggregate error (CAE), where $r_i$ represents the total energy consumption, the duration or the number of cycles, respectively, and $\hat r_i$ represents the predicted summary statistic. All the methods were applied to the synthetic data. Table 1 shows the overall error obtained by these methods. We see that both of the methods using prior information improved over the baseline method AFHMM. AFHMM+PR and AFHMM+LBM performed similarly in terms of NDE and SAE, but AFHMM+LBM improved over AFHMM+PR in terms of DAE (8%) and CAE (50%).

Houses with mains readings: We also applied these methods to 6 houses which have mains readings. We used 10 days of data for each house, and the recorded mains readings were used as the input to the models. All the methods were used to predict the appliance consumption. Table 2 shows the error of each house and also the overall errors. This experiment is more realistic than the one with synthetic mains readings, since the real mains readings were used as the input. We see that both of the methods incorporating prior information improved over the AFHMM in terms of NDE, SAE and DAE. AFHMM+PR and AFHMM+LBM have similar results for SAE. AFHMM+LBM improved over AFHMM+PR for NDE (15%), DAE (10%) and CAE (40%).

Table 3: Normalized disaggregation error (NDE), signal aggregate error (SAE), duration aggregate error (DAE), and cycle aggregate error (CAE) by AFHMM+PR and AFHMM+LBM on UK-DALE data.

METHODS      NDE          SAE          DAE          CAE          TIME (S)
AFHMM        1.57±1.16    1.99±0.52    2.81±0.79    1.37±0.28    118.6±23.1
AFHMM+PR     0.83±0.27    0.82±0.38    1.68±1.21    1.90±0.52    120.4±25.3
AFHMM+LBM    0.84±0.25    0.89±0.38    0.49±0.33    0.59±0.21    123.1±25.8

7.2 UK-DALE data

In the previous section we trained the model using the HES data, and applied the models to different houses of the same data set.
A more realistic situation is to train the model on one data set and apply it to a different data set, because it is unrealistic to expect to obtain appliance-level data from every household on which the system will be deployed. In this section, we use the HES data to train the model parameters of the AFHMM and to model the distributions of the summary statistics. We then apply the models to the UK-DALE dataset [13], which was also gathered from UK households, to make the predictions. There are five houses in UK-DALE, and all of them have mains readings as well as individual appliance readings. All the mains meters were sampled every 6 seconds, and some of them were also sampled at a higher rate; details of the data and how to access it can be found in [13]. We employ three of the houses for analysis in our experiments (houses 1, 2 & 5 in the data). The other two houses were excluded because the correlation between the sum of the submeters and the mains is very low, which suggests that there might be recording errors in the meters. We selected 7 appliances for disaggregation, based on those that typically use the most energy. Since the sample rate of the submeters in the HES data is 2 minutes, we downsampled the UK-DALE signal from 6 seconds to 2 minutes. For each house, we randomly selected a month for analysis. All three methods were applied to the mains readings. For comparison purposes, we computed the NDE, SAE, DAE and CAE errors of all three methods, averaged over 30 days. Table 3 shows the results. The results are consistent with those on the HES data. Both AFHMM+PR and AFHMM+LBM improve over the basic AFHMM, except that AFHMM+PR did not improve the CAE. As for the HES testing data, AFHMM+PR and AFHMM+LBM have similar results on NDE and SAE. AFHMM+LBM again improved over AFHMM+PR in DAE (70%) and CAE (68%).
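The four error measures used throughout the experiments can be sketched as follows; the toy arrays are illustrative only, and `aggregate_error` serves as SAE, DAE or CAE depending on which per-appliance summary statistic is passed in.

```python
import numpy as np

def nde(x, x_hat):
    """Normalized disaggregation error: sum_it (x - x_hat)^2 / sum_it x^2."""
    x, x_hat = np.asarray(x, float), np.asarray(x_hat, float)
    return float(((x - x_hat) ** 2).sum() / (x ** 2).sum())

def aggregate_error(r, r_hat):
    """SAE/DAE/CAE: (1/I) * sum_i |r_hat_i - r_i| / sum_i r_i, where r_i is the
    true per-appliance summary statistic and r_hat_i its prediction."""
    r, r_hat = np.asarray(r, float), np.asarray(r_hat, float)
    return float(np.abs(r_hat - r).sum() / (len(r) * r.sum()))

# Toy sanity checks with a 2-appliance, 3-step example (appliances x time).
x_true = np.array([[1.0, 2.0, 3.0], [2.0, 2.0, 2.0]])
perfect = nde(x_true, x_true)               # perfect prediction
worst = nde(x_true, np.zeros_like(x_true))  # predicting all zeros
sae = aggregate_error(x_true.sum(axis=1), np.array([6.0, 8.0]))
```

Note that NDE penalizes errors at every time point, while the aggregate errors only compare one summary number per appliance; a method can therefore trade one measure against the other, which is what the tables above illustrate.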
These results are consistent in suggesting that incorporating population information into the model can help to reduce the identifiability problem in single-channel BSS problems.

8 Conclusions

We have proposed a latent Bayesian melding approach for incorporating population information with latent variables into individual models, and have applied the approach to energy disaggregation problems. The new approach has been evaluated by applying it to two real-world electricity data sets. The latent Bayesian melding approach has been compared to the posterior regularization approach (a case of the Bayesian melding approach) and to the AFHMM. Both LBM and PR have significantly lower error than the baseline method. LBM improves over PR in predicting the duration and the number of cycles; both methods were similar in the NDE and SAE errors.

Acknowledgments

This work is supported by the Engineering and Physical Sciences Research Council, UK (grant numbers EP/K002732/1 and EP/M008223/1).

References

[1] Leontine Alkema, Adrian E. Raftery, and Samuel J. Clark. Probabilistic projections of HIV prevalence using Bayesian melding. The Annals of Applied Statistics, pages 229–248, 2007.
[2] MOSEK ApS. The MOSEK optimization toolbox for Python manual. Version 7.1 (Revision 28), 2015.
[3] Albert-Laszlo Barabasi and Reka Albert. Emergence of scaling in random networks. Science, 286(5439):509–512, 1999.
[4] N. Batra et al. NILMTK: An open source toolkit for non-intrusive load monitoring. In Proceedings of the 5th International Conference on Future Energy Systems, pages 265–276, New York, NY, USA, 2014.
[5] Robert F. Bordley. A multiplicative formula for aggregating probability assessments. Management Science, 28(10):1137–1148, 1982.
[6] Grace S. Chiu and Joshua M. Gould. Statistical inference for food webs with emphasis on ecological networks via Bayesian melding. Environmetrics, 21(7-8):728–740, 2010.
[7] Luiz Max F. de Carvalho, Daniel A. M. Villela, Flavio Coelho, and Leonardo S. Bastos.
On the choice of the weights for the logarithmic pooling of probability distributions. September 24, 2015.
[8] E. Elhamifar and S. Sastry. Energy disaggregation via learning powerlets and sparse coding. In Proceedings of the Twenty-Ninth AAAI Conference on Artificial Intelligence, pages 629–635, 2015.
[9] K. Ganchev, J. Graça, J. Gillenwater, and B. Taskar. Posterior regularization for structured latent variable models. Journal of Machine Learning Research, 11:2001–2049, 2010.
[10] A. Giffin and A. Caticha. Updating probabilities with data and moments. In The 27th International Workshop on Bayesian Inference and Maximum Entropy Methods in Science and Engineering, NY, July 8–13, 2007.
[11] G. W. Hart. Nonintrusive appliance load monitoring. Proceedings of the IEEE, 80(12):1870–1891, 1992.
[12] Edwin T. Jaynes. Information theory and statistical mechanics. Physical Review, 106(4):620, 1957.
[13] Jack Kelly and William Knottenbelt. The UK-DALE dataset, domestic appliance-level electricity demand and whole-house demand from five UK homes. Scientific Data, 2(150007), 2015.
[14] H. Kim, M. Marwah, M. Arlitt, G. Lyon, and J. Han. Unsupervised disaggregation of low frequency power measurements. In Proceedings of the SIAM Conference on Data Mining, pages 747–758, 2011.
[15] J. Z. Kolter and T. Jaakkola. Approximate inference in additive factorial HMMs with application to energy disaggregation. In Proceedings of AISTATS, volume 22, pages 1472–1482, 2012.
[16] P. Liang, M. I. Jordan, and D. Klein. Learning from measurements in exponential families. In The 26th Annual International Conference on Machine Learning, pages 641–648, 2009.
[17] James G. MacKinnon and Anthony A. Smith. Approximate bias correction in econometrics. Journal of Econometrics, 85(2):205–230, 1998.
[18] G. Mann and A. McCallum. Generalized expectation criteria for semi-supervised learning of conditional random fields. In Proceedings of ACL, pages 870–878, Columbus, Ohio, June 2008.
[19] Keith Myerscough, Jason Frank, and Benedict Leimkuhler. Least-biased correction of extended dynamical systems using observational data. arXiv preprint arXiv:1411.6011, 2014.
[20] O. Parson, S. Ghosh, M. Weal, and A. Rogers. Non-intrusive load monitoring using prior models of general appliance types. In Proceedings of AAAI, pages 356–362, July 2012.
[21] David Poole and Adrian E. Raftery. Inference for deterministic simulation models: The Bayesian melding approach. Journal of the American Statistical Association, pages 1244–1255, 2000.
[22] M. J. Rufo, J. Martín, C. J. Pérez, et al. Log-linear pool to combine prior distributions: A suggestion for a calibration-based approach. Bayesian Analysis, 7(2):411–438, 2012.
[23] H. Ševčíková, A. Raftery, and P. Waddell. Uncertain benefits: Application of Bayesian melding to the Alaskan Way Viaduct in Seattle. Transportation Research Part A: Policy and Practice, 45:540–553, 2011.
[24] Robert L. Wolpert. Comment on "Inference from a deterministic population dynamics model for bowhead whales". Journal of the American Statistical Association, 90(430):426–427, 1995.
[25] M. Wytock and J. Zico Kolter. Contextually supervised source separation with application to energy disaggregation. In Proceedings of AAAI, pages 486–492, 2014.
[26] M. Zhong, N. Goddard, and C. Sutton. Signal aggregate constraints in additive factorial HMMs, with application to energy disaggregation. In NIPS, pages 3590–3598, 2014.
[27] J.-P. Zimmermann, M. Evans, J. Griggs, N. King, L. Harding, P. Roberts, and C. Evans. Household electricity survey, 2012.
Regressive Virtual Metric Learning

Michaël Perrot and Amaury Habrard
Université de Lyon, Université Jean Monnet de Saint-Etienne, Laboratoire Hubert Curien, CNRS, UMR5516, F-42000, Saint-Etienne, France.
{michael.perrot,amaury.habrard}@univ-st-etienne.fr

Abstract

We are interested in supervised metric learning of Mahalanobis-like distances. Existing approaches mainly focus on learning a new distance using similarity and dissimilarity constraints between examples. In this paper, instead of bringing closer examples of the same class and pushing far away examples of different classes, we propose to move the examples with respect to virtual points. Hence, each example is brought closer to an a priori defined virtual point, reducing the number of constraints to satisfy. We show that our approach admits a closed-form solution which can be kernelized. We provide a theoretical analysis showing the consistency of the approach and establishing some links with other classical metric learning methods. Furthermore, we propose an efficient solution to the difficult problem of selecting virtual points, based in part on recent works in optimal transport. Lastly, we evaluate our approach on several state-of-the-art datasets.

1 Introduction

The goal of a metric learning algorithm is to capture the idiosyncrasies in the data, mainly by defining a new space of representation where some semantic constraints between examples are fulfilled. In previous years the main focus of metric learning algorithms has been to learn Mahalanobis-like distances of the form $d_M(x, x') = \sqrt{(x - x')^\top M (x - x')}$, where $M$ is a positive semi-definite (PSD) matrix defining a set of parameters.¹ Using a Cholesky decomposition $M = LL^\top$, one can see that this is equivalent to learning a linear transformation from the input space. Most of the existing approaches in metric learning use constraints of the must-link and cannot-link type between learning examples [1, 2].
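The equivalence between the Mahalanobis-like distance and a linear map can be sketched in a few lines; the matrices below are illustrative stand-ins, not learned metrics.

```python
import numpy as np

def mahalanobis(x, xp, L):
    """d_M(x, x') = sqrt((x - x')^T M (x - x')) with M = L L^T, computed as
    the Euclidean distance between the mapped points L^T x and L^T x'."""
    d = L.T @ (x - xp)
    return float(np.sqrt(d @ d))

x = np.array([1.0, 0.0])
xp = np.array([0.0, 1.0])

d_eucl = mahalanobis(x, xp, np.eye(2))   # L = I recovers the Euclidean distance

L = np.array([[2.0, 0.0], [0.0, 0.5]])   # stretch the first axis, shrink the second
d_learned = mahalanobis(x, xp, L)
```

Learning $L$ rather than $M$ sidesteps the PSD constraint entirely, since $LL^\top$ is PSD by construction, which is exactly the design choice discussed in the related-work section below.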
For example, in a supervised classification task, the goal is to bring closer examples of the same class and to push far away examples of different classes. The idea is that the learned metric should assign a high value to dissimilar examples and a low value to similar examples. Then, this new distance can be used in a classification algorithm such as a nearest neighbor classifier. Note that in this case the set of constraints is quadratic in the number of examples, which can be prohibitive when the number of examples increases. One heuristic is then to select only a subset of the constraints, but selecting such a subset is not trivial. In this paper, we propose to consider a new kind of constraints where each example is associated with an a priori defined virtual point. It allows us to consider the metric learning problem as a simple regression where we try to minimize the differences between learning examples and virtual points. Fig. 1 illustrates the differences between our approach and a classical metric learning approach. It can be noticed that our algorithm only uses a linear number of constraints. However, defining these constraints by hand can be tedious and difficult. To overcome this problem, we present two approaches to automatically define them. The first one is based on some recent advances in the field of optimal transport, while the second one uses a class-based representation space.

¹When $M = I$, the identity matrix, it corresponds to the Euclidean distance.

Figure 1: (a) Classical must-link/cannot-link approach. (b) Our virtual point-based regression formulation. Arrows denote the constraints used by each approach for one particular example in a binary classification task. The classical metric learning approach in Fig. 1(a) uses $O(n^2)$ constraints, bringing closer examples of the same class and pushing far away examples of different classes. On the contrary, our approach presented in Fig.
1(b) moves the examples to the neighborhood of their corresponding virtual point, in black, using only O(n) constraints. (Best viewed in color.) Moreover, thanks to its regression-based formulation, our approach can be easily kernelized, allowing us to deal efficiently with non linear transformations, which is a nice advantage in comparison to some metric learning methods. We also provide a theoretical analysis showing the consistency of our approach and establishing some relationships with a classical metric learning formulation. This paper is organized as follows. In Section 2 we identify several related works. Then in Section 3 we present our approach, provide some theoretical results and give two solutions to generate the virtual points. Section 4 is dedicated to an empirical evaluation of our method on several widely used datasets. Finally, we conclude in Section 5. 2 Related work For up-to-date surveys on metric learning see [3] and [4]. In this section we focus on algorithms which are more closely related to our approach. First of all, one of the most famous approaches in metric learning is LMNN [5], where the authors propose to learn a PSD matrix to improve the k-nearest-neighbors algorithm. In their work, instead of considering pairs of examples, they use triplets (x_i, x_j, x_k) where x_j and x_k are in the neighborhood of x_i and such that x_i and x_j are of the same class and x_k is of a different class. The idea is then to bring closer x_i and x_j while pushing x_k far away. Hence, although the number of constraints seems cubic, the authors propose to only consider triplets of examples which are already close to each other. In contrast, the idea presented in [6] is to collapse all the examples of the same class into a single point and to push infinitely far away examples of different classes. The authors define a measure to estimate the probability of having an example x_j given an example x_i with respect to a learned PSD matrix M. Then, they minimize, w.r.t.
M, the KL divergence between this measure and the best case where the probability is 1 if the two examples are of the same class and 0 otherwise. It can be seen as collapsing all the examples of the same class onto an implicit virtual point. In this paper we use several explicit virtual points and we collapse the examples onto these points with respect to their classes and their distances to them. A recurring issue in Mahalanobis-like metric learning is to fulfill the PSD constraint on the learned metric. Indeed, projecting a matrix onto the PSD cone is not trivial and generally requires a costly eigenvalue decomposition. To address this problem, in ITML [1] the authors propose to use a LogDet divergence as the regularization term. The idea is to learn a matrix which is close to an a priori defined PSD matrix. The authors then show that if the divergence is finite, the learned matrix is guaranteed to be PSD. Another approach, as proposed in [2], is to learn a matrix L such that M = LL^T, i.e. instead of learning the metric the authors propose to learn the projection. The main drawback is the fact that most of the time the resulting optimization problem is not convex [3, 4, 7] and is thus harder to optimize. In this paper, we are also interested in learning L directly. However, because we are using constraints between examples and virtual points, we obtain a convex problem with a closed-form solution allowing us to learn the metric in an efficient way. The problem of learning a metric such that the induced space is not linearly dependent on the input space has been addressed in several works before. First, it is possible to directly learn an intrinsically non linear metric as in χ²-LMNN [8], where the authors propose to learn a χ² distance rather than a Mahalanobis distance. This distance is particularly relevant for histogram comparisons. Note that this kind of approach is close to the kernel learning problem, which is beyond the scope of this work.
Second, another solution, used by local metric learning methods, is to split the input space in several regions and to learn a metric in each region to introduce some non linearity, as in MM-LMNN [7]. Similarly, in GB-LMNN [8] the authors propose to locally refine the metric learned by LMNN by successively splitting the input space. A third kind of approach tries to project the learning examples in a new space which is non linearly dependent on the input space. It can be done in two ways, either by projecting a priori the learning examples in a new space with a KPCA [9] or by rewriting the optimization problem in a kernelized form [1]. The first approach allows one to include non linearity in most of the metric learning algorithms but requires selecting the interesting features beforehand. The second method can be difficult to use as rewriting the optimization problem is most of the time non-trivial [4]. Indeed, if one wants to use the kernel trick, the access to the learning examples should only be done through dot products, which is difficult when working with pairs of examples as is the case in metric learning. In this paper we show that using virtual points chosen in a given target space allows us to kernelize our approach easily and thus to work in a very high dimensional space without an explicit projection, thanks to the kernel trick. Our method is based on a regression and can thus be linked, in its kernelized form, to several approaches in kernelized regression for structured output [10, 11, 12]. The idea behind these approaches is to minimize the difference between input examples and output examples using kernels, i.e. working in a high dimensional space. In our case, the learning examples can be seen as input examples and the virtual points as output examples. However, we only project the learning examples in a high dimensional space; the virtual points already belong to the output space.
Hence, we do not have the pre-image problem [12]. Furthermore, our goal is not to predict a virtual point but to learn a metric between examples and thus, after the learning step, the virtual points are discarded. 3 Contributions The main idea behind our algorithm is to bring the learning examples closer to a set of virtual points. We present this idea in three subsections. First, we assume that we have access to a set of n learning pairs (x, v), where x is a learning example and v is a virtual point associated to x, and we present both the linear and kernelized formulations of our approach, called RVML. It boils down to solving a regression in closed form, the main originality being the introduction of virtual points. In the second subsection, we show that it is possible to theoretically link our approach to a classical metric learning one based on [13]. In the last subsection, we propose two automatic methods to generate the virtual points and to associate them with the learning examples. 3.1 Regressive Virtual Metric Learning (RVML) Given a probability distribution D defined over X × Y, where X ⊆ R^d and Y is a finite label set, let S = {(x_i, y_i)}_{i=1}^n be a set of examples drawn i.i.d. from D. Let f_v : X × Y → V, where V ⊆ R^{d′}, be the function which associates each example to a virtual point. We consider the learning set S_v = {(x_i, v_i)}_{i=1}^n where v_i = f_v(x_i, y_i). For the sake of simplicity, denote by X = (x_1, . . . , x_n)^T and V = (v_1, . . . , v_n)^T the matrices containing respectively one example and the associated virtual point on each line. In this section, we consider that the function f_v is known. We come back to its definition in Section 3.3. Let ‖·‖_F be the Frobenius norm and ‖·‖_2 be the l2 vector norm. Our goal is to learn a matrix L such that M = LL^T, and for this purpose we consider the following optimisation problem: min_L f(L, X, V) = min_L (1/n)‖XL − V‖_F² + λ‖L‖_F².
(1) The idea is to learn a new representation space where each example is close to its associated virtual point. Note that L is a d × d′ matrix and if d′ < d we also perform dimensionality reduction. Theorem 1. The optimal solution of Problem 1 can be found in closed form. Furthermore, we can derive two equivalent solutions: L = (X^T X + λnI)^{-1} X^T V (2) and L = X^T (XX^T + λnI)^{-1} V. (3) Proof. The proof of this theorem can be found in the supplementary material. From Eq. 2 we deduce the matrix M: M = LL^T = (X^T X + λnI)^{-1} X^T V V^T X (X^T X + λnI)^{-1}. (4) Note that M is PSD by construction: x^T M x = x^T L L^T x = ‖L^T x‖_2² ≥ 0. So far, we have focused on the linear setting. We now present a kernelized version, showing that it is possible to learn a metric in a very high dimensional space without an explicit projection. Let φ(x) be a projection function and K(x, x′) = φ(x)^T φ(x′) be its associated kernel. For the sake of readability, let K_X = φ(X)φ(X)^T where φ(X) = (φ(x_1), . . . , φ(x_n))^T. Given the solution matrix L presented in Eq. 3, we have M = X^T (XX^T + λnI)^{-1} V V^T (XX^T + λnI)^{-1} X. Then M_K, the kernelized version of the matrix M, is defined such that: M_K = φ(X)^T (K_X + λnI)^{-1} V V^T (K_X + λnI)^{-1} φ(X). The squared Mahalanobis distance can be written as d_M²(x, x′) = x^T M x + x′^T M x′ − 2 x^T M x′. Thus we can obtain the kernelized version d_{M_K}²(φ(x), φ(x′)) = φ(x)^T M_K φ(x) + φ(x′)^T M_K φ(x′) − 2 φ(x)^T M_K φ(x′) by considering that: φ(x)^T M_K φ(x) = φ(x)^T φ(X)^T (K_X + λnI)^{-1} V V^T (K_X + λnI)^{-1} φ(X) φ(x) = K_X(x)^T (K_X + λnI)^{-1} V V^T (K_X + λnI)^{-1} K_X(x), where K_X(x) = (K(x, x_1), . . . , K(x, x_n))^T is the similarity vector to the examples w.r.t. K. Note that it is also possible to obtain a kernelized version of L: L_K = φ(X)^T (K_X + λnI)^{-1} V. This result is close to a previous one already derived in [11] in a structured output setting. The main difference is the fact that we do not use a kernel on the output (the virtual points here).
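Since both forms are plain regularized least-squares solves, the linear solution of Eq. 2 and the kernelized embedding K_X(x)^T (K_X + λnI)^{-1} V can be sketched in a few lines of NumPy. This is a minimal illustration under our own naming, not the authors' released code:

```python
import numpy as np

def rvml_linear(X, V, lam):
    """Linear RVML sketch (Eq. 2): L = (X^T X + lam*n*I)^{-1} X^T V.
    X is (n, d), V is (n, d') with one virtual point per example."""
    n, d = X.shape
    # Solving the system is cheaper and more stable than forming the inverse.
    return np.linalg.solve(X.T @ X + lam * n * np.eye(d), X.T @ V)

def rbf_gram(A, B, sigma):
    """RBF Gram matrix K(a, b) = exp(-||a - b||^2 / (2 sigma^2))."""
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-sq / (2.0 * sigma ** 2))

def rvml_kernel_fit(X, V, lam, sigma):
    """Kernelized RVML: precompute alpha = (K_X + lam*n*I)^{-1} V, so that
    the embedding of any x is K_X(x)^T alpha and distances under M_K are
    Euclidean distances between embeddings."""
    n = X.shape[0]
    K = rbf_gram(X, X, sigma)
    return np.linalg.solve(K + lam * n * np.eye(n), V)

def rvml_kernel_project(Xnew, X, alpha, sigma):
    """Embed points into the d'-dimensional learned space."""
    return rbf_gram(np.atleast_2d(Xnew), X, sigma) @ alpha
```

Eq. 3 gives the same L via an n × n solve instead of a d × d one, which is preferable when d > n; in both cases M = LL^T is PSD by construction.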
Hence, it is possible to compute the projection of an example x of dimension d into a new space of dimension d′: φ(x)^T L_K = φ(x)^T φ(X)^T (K_X + λnI)^{-1} V = K_X(x)^T (K_X + λnI)^{-1} V. Recall that in this work we are interested in learning a distance between examples and not in the prediction of the virtual points, which only serve as a way to bring closer similar examples and push far away dissimilar examples. From a complexity standpoint, we can see that, assuming the kernel function is easy to compute, the main bottleneck when computing the solution in closed form is the inversion of an n × n matrix. 3.2 Theoretical Analysis In this section, we propose to theoretically show the interest of our approach by proving (i) that it is consistent and (ii) that it is possible to link it to a more classical metric learning formulation. 3.2.1 Consistency Let l(L, (x, v)) = ‖x^T L − v^T‖_2² be our loss and let D_v be the probability distribution over X × V such that p_{D_v}(x, v) = p_D(x, y | v = f_v(x, y)). Showing the consistency boils down to bounding with high probability the true risk, denoted by R(L), by the empirical risk, denoted by R̂(L), such that: R(L) = E_{(x,v)∼D_v} l(L, (x, v)) and R̂(L) = (1/n) Σ_{(x,v)∈S_v} l(L, (x, v)) = (1/n)‖XL − V‖_F². The empirical risk corresponds to the error of the learned matrix L on the learning set S_v. The true risk is the error of L on the unknown distribution D_v. The consistency property ensures that, with a sufficient number of examples, a low empirical risk implies a low true risk with high probability. To show that our approach is consistent, we use the uniform stability framework [14]. Theorem 2. Let ‖v‖_2 ≤ C_v for any v ∈ V and ‖x‖_2 ≤ C_x for any x ∈ X. With probability 1 − δ, for any matrix L optimal solution of Problem 1, we have: R(L) ≤ R̂(L) + (8 C_v² C_x²)/(λn) (1 + C_x/√λ)² + ((16 C_x²/λ + 1) C_v² (1 + C_x/√λ)²) √(ln(1/δ)/(2n)). Proof. The proof of this theorem can be found in the supplementary material.
We obtain a rate of convergence in O(1/√n), which is standard for this kind of bound. 3.2.2 Link with a Classical Metric Learning Formulation In this section we show that it is possible to bound the true risk of a classical metric learning approach with the empirical risk of our formulation. Most of the classical metric learning approaches make use of a notion of margin between similar and dissimilar examples. Hence, similar examples have to be close to each other, i.e. at a distance smaller than a margin γ_1, and dissimilar examples have to be far from each other, i.e. at a distance greater than a margin γ_{-1}. Let (x_i, y_i) and (x_j, y_j) be two examples from X × Y; using this notion of margin, we consider the following loss [13]: l(L, (x_i, y_i), (x_j, y_j)) = [y_{ij} (d²(L^T x_i, L^T x_j) − γ_{y_{ij}})]_+ (5) where y_{ij} = 1 if y_i = y_j and −1 otherwise, [z]_+ = max(0, z) is the hinge loss and γ_{y_{ij}} is the desired margin between examples. As introduced before, we consider that γ_{y_{ij}} takes a big value when the examples are dissimilar, i.e. when y_{ij} = −1, and a small value when the examples are similar, i.e. when y_{ij} = 1. In the following we show that, relating the notion of margin to the distances between virtual points, it is possible to bound the true risk associated with this loss by the empirical risk of our approach up to a constant. Theorem 3. Let D be a distribution over X × Y. Let V ⊂ R^{d′} be a finite set of virtual points and let f_v be defined as f_v(x_i, y_i) = v_i, v_i ∈ V. Let ‖v‖_2 ≤ C_v for any v ∈ V and ‖x‖_2 ≤ C_x for any x ∈ X. Let γ_1 = 2 max_{x_k, x_l : y_{kl}=1} d²(v_k, v_l) and γ_{-1} = (1/2) min_{x_k, x_l : y_{kl}=−1} d²(v_k, v_l); we have: E_{(x_i,y_i)∼D, (x_j,y_j)∼D} [y_{ij} (d²(L^T x_i, L^T x_j) − γ_{y_{ij}})]_+ ≤ 8 (R̂(L) + (8 C_v² C_x²)/(λn) (1 + C_x/√λ)² + ((16 C_x²/λ + 1) C_v² (1 + C_x/√λ)²) √(ln(1/δ)/(2n))). Proof. The proof of this theorem can be found in the supplementary material. In Theorem 3, we can notice that the margins are related to the distances between virtual points and correspond to the ideal margins, i.e.
the margins that we would like to achieve after the learning step. Aside from this remark, we can define γ̂_1 and γ̂_{-1}, the observed margins obtained after the learning step: all the similar examples are in a sphere centered on their corresponding virtual point and of diameter γ̂_1 = 2 max_{(x,v)} ‖x^T L − v^T‖_2. Similarly, the distance between hyperspheres of dissimilar examples is γ̂_{-1} = min_{v,v′ : v ≠ v′} ‖v − v′‖_2 − γ̂_1. As a consequence, even if we do not use cannot-link constraints, our algorithm is able to push dissimilar examples reasonably far away. In the next subsection we present two different methods to select the virtual points. 3.3 Virtual Points Selection Previously, we assumed to have access to the function f_v : X × Y → V. In this subsection, we present two methods for automatically generating the set of virtual points and the mapping f_v. 3.3.1 Using Optimal Transport on the Learning Set In this first approach, we propose to generate the virtual points by using a recent variation of the Optimal Transport (OT) problem [15] allowing one to transport some examples to new points corresponding to a linear combination of a set of known instances. These new points will actually correspond to our virtual points. Our approach works as follows. We begin by extracting a set of landmarks S′ from the training set S. For this purpose, we use an adaptation of the landmark selection method proposed in [16], allowing us to take into account some diversity among the landmarks. To avoid fixing the number of landmarks in advance, we replace it by a simple heuristic: the number of landmarks must be greater than the number of classes, and the maximum distance between an example and a landmark must be lower than the mean of all pairwise distances from the training set, allowing us to have a fully automatic procedure. It is summarized in Algorithm 1.
Algorithm 1: Selecting S′ from a set of examples S.
input: S = {(x_i, y_i)}_{i=1}^n a set of examples; Y the label set.
output: S′, a subset of S.
begin
  µ = mean of distances between all the examples of S
  x_max = argmax_{x∈S} ‖x − 0‖_2
  S′ = {x_max}; S = S \ S′
  ε = max_{x∈S} min_{x′∈S′} ‖x − x′‖_2
  while |S′| < |Y| or ε > µ do
    x_max = argmax_{x∈S} Σ_{x′∈S′} ‖x − x′‖_2
    S′ = S′ ∪ {x_max}; S = S \ S′
    ε = max_{x∈S} min_{x′∈S′} ‖x − x′‖_2
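The landmark-selection heuristic of Algorithm 1 can be transcribed almost directly into NumPy. The following is our own sketch; function and variable names are not from the paper:

```python
import numpy as np

def select_landmarks(X, y):
    """Greedy landmark selection following Algorithm 1: seed with the
    largest-norm example, then repeatedly add the example maximizing the
    summed distance to the current landmarks, until there are at least as
    many landmarks as classes and every remaining example lies within mu
    (the mean pairwise distance) of some landmark."""
    n = X.shape[0]
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    mu = D[np.triu_indices(n, k=1)].mean()
    remaining = list(range(n))
    seed = max(remaining, key=lambda i: np.linalg.norm(X[i]))
    landmarks = [seed]
    remaining.remove(seed)
    n_classes = len(set(y))
    while remaining:
        eps = max(D[i, landmarks].min() for i in remaining)
        if len(landmarks) >= n_classes and eps <= mu:
            break
        nxt = max(remaining, key=lambda i: D[i, landmarks].sum())
        landmarks.append(nxt)
        remaining.remove(nxt)
    return landmarks
```

The returned indices define the landmark matrix S′ used as the OT target in the next step.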
Then we compute an optimal transport from the training set S to the landmark set S′. For this purpose, we create a real matrix C of size |S| × |S′| giving the cost of transporting one training instance to a landmark, such that C(i, j) = ‖x_i − x′_j‖_2 with x_i ∈ S and x′_j ∈ S′. The optimal transport is found by learning a matrix γ ∈ R^{|S|×|S′|} able to minimize the cost of moving training examples to the landmark points. Let S′ be the matrix of landmark points (one per line); the transport w.r.t. γ of any training instance (x_i, y_i) gives a new virtual point such that f_v(x_i, y_i) = γ(i)S′, γ(i) denoting the i-th row of γ. Note that this new virtual point is a linear combination of the landmark instances to which the example is transported. The set of virtual points is then defined by V = γS′. The virtual points are thus not defined a priori but are automatically learned by solving a problem of optimal transport. Note that this transportation mode is potentially non linear since there is no guarantee that there exists a matrix T such that V = XT. Our metric learning approach can, in this case, be seen as an approximation of the result given by the optimal transport. To learn γ, we use the following optimization problem proposed in [17]: argmin_γ ⟨γ, C⟩_F − (1/λ) h(γ) + η Σ_j Σ_c ‖γ(y_i = c, j)‖_q^p, where h(γ) = −Σ_{i,j} γ(i, j) log(γ(i, j)) is the entropy of γ, which allows solving the transportation problem efficiently with the Sinkhorn-Knopp algorithm [18]. The second regularization term, where γ(y_i = c, j) corresponds to the entries of the j-th column of γ whose input example has class c, has been introduced in [17]. The goal of this term is to prevent input examples of different classes from moving toward the same output examples, by promoting group sparsity in the matrix γ thanks to ‖·‖_q^p, an l_q-norm to the power of p, used here with q = 1 and p = 1/2.
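If we keep only the entropic part and drop the class-based group-sparsity term, the transport plan can be computed with a standard Sinkhorn-Knopp iteration. The following is a minimal sketch under that simplification, with uniform marginals and our own naming:

```python
import numpy as np

def sinkhorn_plan(C, reg, n_iter=500):
    """Entropy-regularized OT between uniform marginals via Sinkhorn-Knopp.
    C is the |S| x |S'| cost matrix; reg plays the role of 1/lambda.
    The group-sparsity class regularizer of [17] is omitted here."""
    n, m = C.shape
    a = np.full(n, 1.0 / n)   # source marginal (training examples)
    b = np.full(m, 1.0 / m)   # target marginal (landmarks)
    K = np.exp(-C / reg)
    u = np.ones(n)
    for _ in range(n_iter):
        v = b / (K.T @ u)
        u = a / (K @ v)
    return u[:, None] * K * v[None, :]

def virtual_points(gamma, landmarks):
    """Barycentric mapping: each example's virtual point is its row of gamma
    (rescaled here to sum to one) times the landmark matrix, i.e. gamma(i) S'."""
    return (gamma / gamma.sum(axis=1, keepdims=True)) @ landmarks
```

Note that with uniform marginals each row of γ sums to 1/n, so the rescaling in `virtual_points` simply recovers a convex combination of the landmarks.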
3.3.2 Using a Class-based Representation Space For this second approach, we propose to define virtual points as the unit vectors of a space of dimension |Y|. Let e_j ∈ R^{|Y|} be such a unit vector (1 ≤ j ≤ |Y|), i.e. a vector where all the attributes are 0 except attribute j, which is set to 1, to which we associate a class label from Y. Then, for any learning example (x_i, y_i), we define f_v(x_i, y_i) = e_{#y_i} where #y_i = j if e_j is mapped to the class y_i. Thus, we have exactly |Y| virtual points, each one corresponding to a unit vector and a class label. We call this approach the class-based representation space method. If the number of classes is smaller than the number of dimensions used to represent the learning examples, then our method performs dimensionality reduction for free. Furthermore, our approach will try to project all the examples of one class on the same axis, while examples of other classes will tend to be projected on different axes. The underlying intuition behind the new space defined by L is to make each attribute discriminant for one class. Table 1: Comparison of our approach with several baselines in the linear setting.
(1NN, LMNN and SCML are baselines; RVML-Lin-OT and RVML-Lin-Class are our approach.)
Base | 1NN | LMNN | SCML | RVML-Lin-OT | RVML-Lin-Class
Amazon | 41.51 ± 3.24 | 65.50 ± 2.28 | 71.68 ± 1.86 | 71.62 ± 1.34 | 73.09 ± 2.49
Breast | 95.49 ± 0.79 | 95.49 ± 0.89 | 96.50 ± 0.64* | 95.24 ± 1.21 | 95.34 ± 0.95
Caltech | 18.04 ± 2.20 | 49.68 ± 2.76 | 52.84 ± 1.61 | 52.51 ± 2.41 | 55.41 ± 2.55*
DSLR | 29.61 ± 4.38 | 76.08 ± 4.79 | 65.10 ± 9.00 | 74.71 ± 5.27 | 75.29 ± 5.08
Ionosphere | 86.23 ± 1.95 | 88.02 ± 3.02 | 90.38 ± 2.55* | 87.36 ± 3.12 | 82.74 ± 2.81
Isolet | 88.97 | 95.83 | 89.61 | 91.40 | 94.61
Letters | 94.74 ± 0.27 | 96.43 ± 0.28* | 96.13 ± 0.20 | 90.25 ± 0.60 | 95.51 ± 0.26
Pima | 69.91 ± 1.69 | 70.04 ± 2.20 | 69.22 ± 2.60 | 70.48 ± 3.19 | 69.57 ± 2.85
Scale | 78.68 ± 2.66 | 78.20 ± 1.91 | 93.39 ± 1.70* | 90.05 ± 2.13 | 87.94 ± 1.99
Splice | 71.17 | 82.02 | 85.43 | 84.64 | 78.44
Svmguide1 | 95.12 | 95.03 | 87.38 | 94.83 | 85.25
Wine | 96.18 ± 1.59 | 98.36 ± 1.03 | 96.91 ± 1.93 | 98.55 ± 1.67 | 98.18 ± 1.48
Webcam | 42.90 ± 4.19 | 85.81 ± 3.75 | 90.43 ± 2.70 | 88.60 ± 3.63 | 88.60 ± 2.69
mean | 69.89 | 82.81 | 83.46 | 83.86 | 83.07
4 Experimental results In this section, we evaluate our approach on 13 different datasets coming either from the UCI repository [19] or used in recent works in metric learning [8, 20, 21]. For isolet, splice and svmguide1 we have access to a standard training/test partition; for the other datasets we use a 70% training / 30% test partition, perform the experiments on 10 different splits, and average the results. We normalize the examples with respect to the training set by subtracting from each attribute its mean and dividing by 3 times its standard deviation. We set our regularization parameter λ with a 5-fold cross-validation. After the metric learning step, we use a 1-nearest-neighbor classifier to assess the performance of the metric and report the accuracy obtained. We perform two series of experiments.
First, we consider our linear formulation used with the two virtual points selection methods presented in this paper: RVML-Lin-OT, based on Optimal Transport (Section 3.3.1), and RVML-Lin-Class, using the class-based representation space method (Section 3.3.2). We compare them to a 1-nearest-neighbor classifier without metric learning (1NN), and to two state-of-the-art linear metric learning methods: LMNN [5] and SCML [20]. In a second series, we consider the kernelized versions of RVML, namely RVML-RBF-OT and RVML-RBF-Class, based respectively on the Optimal Transport and class-based representation space methods, with an RBF kernel with the parameter σ fixed as the mean of all pairwise training set Euclidean distances [16]. We compare them to non linear methods using a KPCA with an RBF kernel2 as a pre-process: 1NN-KPCA, a 1-nearest-neighbor classifier without metric learning, and LMNN-KPCA, corresponding to LMNN in the KPCA space. The number of dimensions is fixed as that of the original space for high dimensional datasets (more than 100 attributes), to 3 times the original dimension when the dimension is smaller (between 5 and 100 attributes) and to 4 times the original dimension for the lowest dimensional datasets (less than 5 attributes). We also consider some local metric learning methods: GB-LMNN [8], a non linear version of LMNN, and SCML-Local [20], the local version of SCML. For all these methods, we use the implementations available online, letting them handle hyper-parameter tuning. The results for linear methods are presented in Table 1, while Table 2 gives the results obtained with the non linear approaches. In each table, the best result on each line is highlighted with a bold font while the second best result is underlined. A star indicates either that the best baseline is significantly better than our best result or that our best result is significantly better than the best baseline according to classical significance tests (the p-value being fixed at 0.05).
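The evaluation protocol described above (normalization with respect to the training set, then 1-NN accuracy in the learned space) can be sketched as follows; the helper names are ours:

```python
import numpy as np

def normalize_train_test(X_train, X_test):
    """Normalize w.r.t. the training set: subtract each attribute's mean and
    divide by 3 times its standard deviation, as in the experimental setup."""
    mean = X_train.mean(axis=0)
    scale = 3.0 * X_train.std(axis=0)
    scale[scale == 0] = 1.0  # guard against constant attributes
    return (X_train - mean) / scale, (X_test - mean) / scale

def knn1_accuracy(X_train, y_train, X_test, y_test):
    """1-nearest-neighbor accuracy between (already projected) example sets."""
    d = np.linalg.norm(X_test[:, None, :] - X_train[None, :, :], axis=-1)
    pred = y_train[d.argmin(axis=1)]
    return (pred == y_test).mean()
```

After metric learning, the same 1-NN routine is simply applied to the projected examples X L (or to the kernelized embeddings), since distances under M = LL^T are Euclidean distances in the projected space.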
We can make the following remarks. In the linear setting, our approaches are very competitive with the state of the art, and RVML-Lin-OT tends to be the best on average, even though it must be noticed that SCML is very competitive on some datasets (the average difference is not significant). RVML-Lin-Class performs slightly worse on average. Considering now the non linear methods, our approaches improve their performance and are significantly better than the others on average; RVML-RBF-Class has the best average behavior in this setting. 2With the σ parameter fixed as previously to the mean of all pairwise training set Euclidean distances. Table 2: Comparison of our approach with several baselines in the non-linear case.
(1NN-KPCA, LMNN-KPCA, GB-LMNN and SCML-Local are baselines; RVML-RBF-OT and RVML-RBF-Class are our approach.)
Base | 1NN-KPCA | LMNN-KPCA | GB-LMNN | SCML-Local | RVML-RBF-OT | RVML-RBF-Class
Amazon | 20.27 ± 2.42 | 53.16 ± 3.73 | 65.53 ± 2.32 | 69.14 ± 1.74 | 73.51 ± 0.83 | 76.22 ± 2.09*
Breast | 92.43 ± 2.19 | 95.39 ± 1.32 | 95.58 ± 0.87 | 96.31 ± 0.66 | 95.73 ± 0.97 | 95.78 ± 0.92
Caltech | 20.82 ± 8.29 | 29.88 ± 10.89 | 49.91 ± 2.80 | 50.56 ± 1.62 | 54.39 ± 1.89 | 57.98 ± 2.22*
DSLR | 64.90 ± 5.81 | 73.92 ± 7.57 | 76.08 ± 4.79 | 62.55 ± 6.94 | 70.39 ± 4.48 | 76.67 ± 4.57
Ionosphere | 75.57 ± 2.79 | 85.66 ± 2.55 | 87.36 ± 3.02 | 90.94 ± 3.02 | 90.66 ± 3.10 | 93.11 ± 3.30*
Isolet | 68.70 | 96.28 | 96.02 | 91.40 | 95.96 | 96.73
Letter | 95.39 ± 0.27 | 97.17* ± 0.18 | 96.51 ± 0.25 | 96.63 ± 0.26 | 91.26 ± 0.50 | 96.09 ± 0.21
Pima | 69.57 ± 2.64 | 69.48 ± 2.04 | 69.52 ± 2.27 | 68.40 ± 2.75 | 69.35 ± 2.95 | 70.74 ± 2.36
Scale | 78.36 ± 0.88 | 88.10 ± 2.26 | 77.88 ± 2.43 | 93.86 ± 1.78 | 95.19 ± 1.46* | 94.07 ± 2.02
Splice | 66.99 | 88.97 | 82.21 | 87.13 | 88.51 | 88.32
Svmguide1 | 95.72 | 95.60 | 95.00 | 87.40 | 95.67 | 95.05
Wine | 92.18 ± 1.23 | 95.82 ± 2.98 | 98.00 ± 1.34 | 96.55 ± 2.00 | 98.91 ± 1.53 | 98.00 ± 1.81
Webcam | 73.55 ± 4.57 | 84.52 ± 3.83 | 85.81 ± 3.75 | 88.71 ± 2.83 | 88.71 ± 4.28 | 88.92 ± 2.91
mean | 70.34 | 81.07 | 82.72 | 83.04 | 85.25 | 86.74
These experiments show that our regressive formulation is very competitive and is even able to improve state-of-the-art performances in a non linear
setting, and consequently that our virtual point selection methods automatically select correct instances. Considering the virtual point selection, we can observe that the OT formulation performs better than the class-based representation space one in the linear case, while it is the opposite in the non-linear case. We think that this can be explained by the fact that the OT approach generates more virtual points in a potentially non linear way, which brings more expressiveness in the linear case. On the other hand, in the non linear one, the relatively small number of virtual points used by the class-based method seems to induce a better regularization. In Section 4 of the supplementary material, we provide additional experiments showing the interest of using explicit virtual points and the need for a careful association between examples and virtual points. We also provide some graphics showing 2D projections of the space learned by RVML-Lin-Class and RVML-RBF-Class on the Isolet dataset, illustrating the capability of these approaches to learn discriminative attributes. In terms of computational cost, our approach, implemented in closed form [22], is competitive with classical methods but does not yield significant improvements. Indeed, in practice, classical approaches only consider a small number of constraints, e.g. c times the number of examples, where c is a small constant, in the case of SCML. Thus, the practical computational complexity of both our approach and classical methods is linearly dependent on the number of examples. 5 Conclusion We present a new metric learning approach based on a regression and aiming at bringing the learning examples closer to some a priori defined virtual points. The number of constraints has the advantage of growing linearly with the size of the learning set, in opposition to the quadratic growth of standard must-link cannot-link approaches.
Moreover, our method can be solved in closed form and can be easily kernelized, allowing us to deal with non linear problems. Additionally, we propose two methods to define the virtual points: one making use of recent advances in the field of optimal transport, and one based on unit vectors of a class-based representation space, allowing one to directly perform some dimensionality reduction. Theoretically, we show that our approach is consistent and we are able to link our empirical risk to the true risk of a classical metric learning formulation. Finally, we empirically show that our approach is competitive with the state of the art in the linear case and outperforms some classical approaches in the non-linear one. We think that this work opens the door to the design of new metric learning formulations; in particular, the definition of the virtual points can bring a way to control some particular properties of the metric (rank, locality, discriminative power, ...). As a consequence, this aspect opens new issues which are in part related to landmark selection problems but also to the ability to embed expressive semantic constraints to satisfy by means of the virtual points. Other perspectives include the development of a specific solver, of online versions, the use of low-rank-inducing norms or the conception of new local metric learning methods. Another direction would be to study similarity learning extensions to perform linear classification such as in [21, 23]. References [1] Jason V. Davis, Brian Kulis, Prateek Jain, Suvrit Sra, and Inderjit S. Dhillon. Information-theoretic metric learning. In Proc. of ICML, pages 209–216, 2007. [2] Jacob Goldberger, Sam T. Roweis, Geoffrey E. Hinton, and Ruslan Salakhutdinov. Neighbourhood components analysis. In Proc. of NIPS, pages 513–520, 2004. [3] Aurélien Bellet, Amaury Habrard, and Marc Sebban. Metric Learning. Synthesis Lectures on Artificial Intelligence and Machine Learning. Morgan & Claypool Publishers, 2015.
[4] Brian Kulis. Metric learning: A survey. Foundations and Trends in Machine Learning, 5(4):287–364, 2013. [5] Kilian Q. Weinberger, John Blitzer, and Lawrence K. Saul. Distance metric learning for large margin nearest neighbor classification. In Proc. of NIPS, pages 1473–1480, 2005. [6] Amir Globerson and Sam T. Roweis. Metric learning by collapsing classes. In Proc. of NIPS, pages 451–458, 2005. [7] Kilian Q. Weinberger and Lawrence K. Saul. Distance metric learning for large margin nearest neighbor classification. JMLR, 10:207–244, 2009. [8] Dor Kedem, Stephen Tyree, Kilian Q. Weinberger, Fei Sha, and Gert R. G. Lanckriet. Nonlinear metric learning. In Proc. of NIPS, pages 2582–2590, 2012. [9] Bernhard Schölkopf, Alex J. Smola, and Klaus-Robert Müller. Kernel principal component analysis. In Proc. of ICANN, pages 583–588, 1997. [10] Jason Weston, Olivier Chapelle, André Elisseeff, Bernhard Schölkopf, and Vladimir Vapnik. Kernel dependency estimation. In Proc. of NIPS, pages 873–880, 2002. [11] Corinna Cortes, Mehryar Mohri, and Jason Weston. A general regression technique for learning transductions. In Proc. of ICML, pages 153–160, 2005. [12] Hachem Kadri, Mohammad Ghavamzadeh, and Philippe Preux. A generalized kernel approach to structured output learning. In Proc. of ICML, pages 471–479, 2013. [13] Rong Jin, Shijun Wang, and Yang Zhou. Regularized distance metric learning: Theory and algorithm. In Proc. of NIPS, pages 862–870, 2009. [14] Olivier Bousquet and André Elisseeff. Stability and generalization. JMLR, 2:499–526, 2002. [15] Cédric Villani. Optimal transport: old and new, volume 338. Springer Science & Business Media, 2008. [16] Purushottam Kar and Prateek Jain. Similarity-based learning via data driven embeddings. In Proc. of NIPS, pages 1998–2006, 2011. [17] Nicolas Courty, Rémi Flamary, and Devis Tuia. Domain adaptation with regularized optimal transport. In Proc. of ECML/PKDD, pages 274–289, 2014. [18] Marco Cuturi.
Sinkhorn distances: Lightspeed computation of optimal transport. In Proc. of NIPS, pages 2292–2300, 2013. [19] M. Lichman. UCI machine learning repository, 2013. [20] Yuan Shi, Aurélien Bellet, and Fei Sha. Sparse compositional metric learning. In Proc. of AAAI Conference on Artificial Intelligence, pages 2078–2084, 2014. [21] Aurélien Bellet, Amaury Habrard, and Marc Sebban. Similarity learning for provably accurate sparse linear classification. In Proc. of ICML, 2012. [22] The closed-form implementation of RVML is freely available on the authors' website. [23] Maria-Florina Balcan, Avrim Blum, and Nathan Srebro. Improved guarantees for learning via similarity functions. In Proc. of COLT, pages 287–298, 2008. | 2015 | 83 |
5,976 | Halting in Random Walk Kernels Mahito Sugiyama ISIR, Osaka University, Japan JST, PRESTO mahito@ar.sanken.osaka-u.ac.jp Karsten M. Borgwardt D-BSSE, ETH Zürich Basel, Switzerland karsten.borgwardt@bsse.ethz.ch Abstract Random walk kernels measure graph similarity by counting matching walks in two graphs. In their most popular form of geometric random walk kernels, longer walks of length k are downweighted by a factor of λ^k (λ < 1) to ensure convergence of the corresponding geometric series. We know from the field of link prediction that this downweighting often leads to a phenomenon referred to as halting: longer walks are downweighted so much that the similarity score is completely dominated by the comparison of walks of length 1. This is a naïve kernel between edges and vertices. We theoretically show that halting may occur in geometric random walk kernels. We also empirically quantify its impact in simulated datasets and popular graph classification benchmark datasets. Our findings promise to be instrumental in future graph kernel development and applications of random walk kernels. 1 Introduction Over the last decade, graph kernels have become a popular approach to graph comparison [4, 5, 7, 9, 12, 13, 14], which is at the heart of many machine learning applications in bioinformatics, imaging, and social-network analysis. The first and best-studied instance of this family of kernels are random walk kernels, which count matching walks in two graphs [5, 7] to quantify their similarity. In particular, the geometric random walk kernel [5] is often used in applications as a baseline comparison method on graph benchmark datasets when developing new graph kernels. These geometric random walk kernels assign a weight λ^k to walks of length k, where λ < 1 is set to be small enough to ensure convergence of the corresponding geometric series. Related similarity measures have also been employed in link prediction [6, 10] as a similarity score between vertices [8].
However, there is one caveat regarding these approaches. Walk-based similarity scores with exponentially decaying weights tend to suffer from a problem referred to as halting [1]. They may downweight walks of length 2 and more so much that the similarity score is ultimately completely dominated by walks of length 1. In other words, they are almost identical to a simple comparison of edges and vertices, which ignores any topological information in the graph beyond single edges. Such a simple similarity measure could be computed more efficiently outside the random walk framework. Therefore, halting may affect both the expressivity and efficiency of these similarity scores. Halting has been conjectured to occur in random walk kernels [1], but its existence in graph kernels has never been theoretically proven or empirically demonstrated. Our goal in this study is to answer the open question of whether and when halting occurs in random walk graph kernels. We theoretically show that halting may occur in graph kernels and that its extent depends on properties of the graphs being compared (Section 2). We empirically demonstrate in which simulated datasets and popular graph classification benchmark datasets halting is a concern (Section 3). We conclude by summarizing when halting occurs in practice and how it can be avoided (Section 4). We believe that our findings will be instrumental in future applications of random walk kernels and the development of novel graph kernels.
2 Theoretical Analysis of Halting
We theoretically analyze the phenomenon of halting in random walk graph kernels. First, we review the definition of graph kernels in Section 2.1. We then present our key theoretical result regarding halting in Section 2.2 and clarify the connection to linear kernels on vertex and edge label histograms in Section 2.3.
2.1 Random Walk Kernels
Let G = (V, E, φ) be a labeled graph, where V is the vertex set, E is the edge set, and φ is a mapping φ: V ∪ E → Σ with the range Σ of vertex and edge labels. For an edge (u, v) ∈ E, we identify (u, v) and (v, u) if G is undirected. The degree of a vertex v ∈ V is denoted by d(v). The direct (tensor) product G_× = (V_×, E_×, φ_×) of two graphs G = (V, E, φ) and G′ = (V′, E′, φ′) is defined as follows [1, 5, 14]:
$$V_\times = \{\, (v, v') \in V \times V' \mid \varphi(v) = \varphi'(v') \,\},$$
$$E_\times = \{\, ((u, u'), (v, v')) \in V_\times \times V_\times \mid (u, v) \in E,\ (u', v') \in E',\ \text{and}\ \varphi(u, v) = \varphi'(u', v') \,\},$$
and all labels are inherited, i.e. φ_×((v, v′)) = φ(v) = φ′(v′) and φ_×((u, u′), (v, v′)) = φ(u, v) = φ′(u′, v′). We denote by A_× the adjacency matrix of G_×, and by δ_× and Δ_× the minimum and maximum degrees of G_×, respectively.
To measure the similarity between graphs G and G′, random walk kernels count all pairs of matching walks on G and G′ [2, 5, 7, 11]. If we assume a uniform distribution for the starting and stopping probabilities over the vertices of G and G′, the number of matching walks is obtained through the adjacency matrix A_× of the product graph G_× [14]. For each k ∈ N, the k-step random walk kernel between two graphs G and G′ is defined as
$$K^k_\times(G, G') = \sum_{i,j=1}^{|V_\times|} \Big[ \sum_{l=0}^{k} \lambda_l A_\times^l \Big]_{ij}$$
with a sequence of positive, real-valued weights λ_0, λ_1, λ_2, ..., λ_k, assuming that A_×^0 = I, the identity matrix. Its limit K^∞_×(G, G′) is simply called the random walk kernel. Interestingly, K^∞_× can be computed directly if the weights form a geometric series, i.e. λ_l = λ^l, resulting in the geometric random walk kernel:
$$K_{GR}(G, G') = \sum_{i,j=1}^{|V_\times|} \Big[ \sum_{l=0}^{\infty} \lambda^l A_\times^l \Big]_{ij} = \sum_{i,j=1}^{|V_\times|} \Big[ (I - \lambda A_\times)^{-1} \Big]_{ij}.$$
To see that the inverse exists, suppose (I − λA_×)x = 0 for some x. Then λA_×x = x and (λA_×)^l x = x for any l ∈ N. If (λA_×)^l converges to 0 as l → ∞, then x must be 0, so (I − λA_×) is invertible. Therefore (I − λA_×)^{-1} = ∑_{l=0}^{∞} λ^l A_×^l follows from the identity (I − λA_×)(I + λA_× + λ²A_×² + ...) = I [5].
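The definitions above translate directly into a few lines of linear algebra. Here is a minimal sketch (function and variable names are my own, not from the paper), under the simplifying assumption of unlabeled, undirected graphs given as adjacency matrices, so that the direct product graph's adjacency matrix is the plain Kronecker product:

```python
import numpy as np

def product_adjacency(A, B):
    # For unlabeled graphs, the direct (tensor) product graph has
    # adjacency matrix equal to the Kronecker product of A and B.
    return np.kron(A, B)

def k_step_kernel(A, B, lam, k):
    # K^k(G, G') = sum_{i,j} [ sum_{l=0}^k lam^l * Ax^l ]_{ij}
    Ax = product_adjacency(A, B)
    S = np.zeros_like(Ax, dtype=float)
    P = np.eye(Ax.shape[0])          # Ax^0 = I
    for l in range(k + 1):
        S += (lam ** l) * P
        P = P @ Ax
    return S.sum()

def geometric_kernel(A, B, lam):
    # K_GR(G, G') = sum_{i,j} [ (I - lam * Ax)^{-1} ]_{ij},
    # well-defined only when lam < 1 / mu_max(Ax).
    Ax = product_adjacency(A, B)
    n = Ax.shape[0]
    mu_max = np.max(np.linalg.eigvalsh(Ax))
    assert lam * mu_max < 1, "geometric series does not converge"
    return np.linalg.inv(np.eye(n) - lam * Ax).sum()
```

As k grows, the truncated k-step kernel approaches the geometric kernel, since the discarded tail is a convergent geometric series.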
It is well known that the geometric series of matrices, often called the Neumann series, I + λA_× + (λA_×)² + ··· converges only if the maximum eigenvalue of A_×, denoted by μ_{×,max}, is strictly smaller than 1/λ. Therefore, the geometric random walk kernel K_GR is well-defined only if λ < 1/μ_{×,max}. The minimum and maximum degrees δ_× and Δ_× of G_× satisfy the following relationship [3]:
$$\delta_\times \ \le\ \bar d_\times \ \le\ \mu_{\times,\max} \ \le\ \Delta_\times,$$
where d̄_× is the average of the vertex degrees of G_×, i.e. d̄_× = (1/|V_×|) ∑_{v∈V_×} d(v). In practice, it is sufficient to set the parameter λ < 1/Δ_×. In the inductive learning setting, since we do not know a priori the target graphs that a learner will receive in the future, λ should be small enough that λ < 1/μ_{×,max} for any pair of unseen graphs; otherwise, we would need to re-compute the full kernel matrix and re-train the learner. In the transductive setting, we are given a collection 𝒢 of graphs beforehand, so we can explicitly compute the upper bound of λ, which is (max_{G,G′∈𝒢} μ_{×,max})^{-1}, the inverse of the maximum of the maximum eigenvalues over all pairs of graphs G, G′ ∈ 𝒢.
2.2 Halting
The geometric random walk kernel K_GR is one of the most popular graph kernels, as it can take walks of any length into account [5, 14]. However, the fact that it weights walks of length k by the kth power of λ, together with the condition that λ < (μ_{×,max})^{-1} < 1, immediately tells us that the contribution of longer walks is significantly lowered in K_GR. If the contribution of walks of length 2 and more to the kernel value is completely dominated by the contribution of walks of length 1, we speak of halting: it is as if the random walks halt after one step. Here, we analyze under which conditions this halting phenomenon may occur in geometric random walk kernels. We obtain the following key theoretical statement by comparing K_GR to the one-step random walk kernel K^1_×.
Theorem 1. Let λ_0 = 1 and λ_1 = λ in the random walk kernel.
For a pair of graphs G and G′,
$$K^1_\times(G, G') \ \le\ K_{GR}(G, G') \ \le\ K^1_\times(G, G') + \varepsilon, \qquad \text{where}\quad \varepsilon = |V_\times|\,\frac{(\lambda\Delta_\times)^2}{1 - \lambda\Delta_\times},$$
and ε monotonically converges to 0 as λ → 0.
Proof. Let d(v) be the degree of a vertex v in G_× and N(v) be the set of neighboring vertices of v, that is, N(v) = {u ∈ V_× | (u, v) ∈ E_×}. Since A_× is the adjacency matrix of G_×, the following relationships hold:
$$\sum_{i,j=1}^{|V_\times|} [A_\times]_{ij} = \sum_{v \in V_\times} d(v) \le |V_\times|\Delta_\times, \qquad \sum_{i,j=1}^{|V_\times|} [A_\times^2]_{ij} = \sum_{v \in V_\times} \sum_{v' \in N(v)} d(v') \le |V_\times|\Delta_\times^2,$$
$$\sum_{i,j=1}^{|V_\times|} [A_\times^3]_{ij} = \sum_{v \in V_\times} \sum_{v' \in N(v)} \sum_{v'' \in N(v')} d(v'') \le |V_\times|\Delta_\times^3, \quad \ldots, \quad \sum_{i,j=1}^{|V_\times|} [A_\times^n]_{ij} \le |V_\times|\Delta_\times^n.$$
From the assumption that λΔ_× < 1, we have
$$K_{GR}(G, G') = \sum_{i,j=1}^{|V_\times|} [I + \lambda A_\times + \lambda^2 A_\times^2 + \ldots]_{ij} = K^1_\times(G, G') + \sum_{i,j=1}^{|V_\times|} [\lambda^2 A_\times^2 + \lambda^3 A_\times^3 + \ldots]_{ij}$$
$$\le\ K^1_\times(G, G') + |V_\times|\lambda^2\Delta_\times^2 (1 + \lambda\Delta_\times + \lambda^2\Delta_\times^2 + \ldots) = K^1_\times(G, G') + \varepsilon.$$
It is clear that ε monotonically goes to 0 as λ → 0. ■
Moreover, we can normalize ε by dividing K_GR(G, G′) by K^1_×(G, G′).
Corollary 1. Let λ_0 = 1 and λ_1 = λ in the random walk kernel. For a pair of graphs G and G′,
$$1 \ \le\ \frac{K_{GR}(G, G')}{K^1_\times(G, G')} \ \le\ 1 + \varepsilon', \qquad \text{where}\quad \varepsilon' = \frac{(\lambda\Delta_\times)^2}{(1 - \lambda\Delta_\times)(1 + \lambda \bar d_\times)}$$
and d̄_× is the average of the vertex degrees of G_×.
Proof. Since we have K^1_×(G, G′) = |V_×| + λ ∑_{v∈V_×} d(v) = |V_×|(1 + λd̄_×), it follows that ε/K^1_×(G, G′) = ε′. ■
Theorem 1 can easily be generalized to any k-step random walk kernel K^k_×.
Corollary 2. Let ε(k) = |V_×|(λΔ_×)^k / (1 − λΔ_×). For a pair of graphs G and G′, we have
$$K^k_\times(G, G') \ \le\ K_{GR}(G, G') \ \le\ K^k_\times(G, G') + \varepsilon(k + 1).$$
Our results imply that, in the geometric random walk kernel K_GR, the contribution of walks of length 2 and longer diminishes for very small choices of λ. This can easily happen in real-world graph data, as λ is upper-bounded by the inverse of the maximum degree of the product graph.
2.3 Relationships to Linear Kernels on Label Histograms
Next, we clarify the relationship between K_GR and basic linear kernels on vertex and edge label histograms. We show that halting causes K_GR to converge to such linear kernels.
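The sandwich bound of Theorem 1 is easy to check numerically. A small sketch (my own, not code from the paper), again assuming unlabeled graphs so the product graph is the plain Kronecker product; it computes K^1_×, K_GR, and ε and lets us observe halting as λ shrinks:

```python
import numpy as np

def walk_kernels(A, B, lam):
    # Direct product of two unlabeled graphs = Kronecker product.
    Ax = np.kron(A, B)
    n = Ax.shape[0]
    K1 = n + lam * Ax.sum()                          # one-step kernel (lambda_0 = 1, lambda_1 = lam)
    KGR = np.linalg.inv(np.eye(n) - lam * Ax).sum()  # geometric kernel
    dmax = int(Ax.sum(axis=0).max())                 # maximum degree of the product graph
    assert lam * dmax < 1                            # assumption of Theorem 1
    eps = n * (lam * dmax) ** 2 / (1 - lam * dmax)   # epsilon from Theorem 1
    return K1, KGR, eps
```

As λ → 0 the bound ε shrinks and K_GR collapses onto the one-step kernel K^1_×, which is halting in miniature.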
Given a pair of graphs G and G′, let us introduce two linear kernels on vertex and edge histograms. Assume that the range of labels is Σ = {1, 2, ..., s} without loss of generality. The vertex label histogram of a graph G = (V, E, φ) is a vector f = (f_1, f_2, ..., f_s) such that f_i = |{v ∈ V | φ(v) = i}| for each i ∈ Σ. Let f and f′ be the vertex label histograms of graphs G and G′, respectively. The vertex label histogram kernel K_VH(G, G′) is then defined as the linear kernel between f and f′:
$$K_{VH}(G, G') = \langle f, f' \rangle = \sum_{i=1}^{s} f_i f'_i.$$
Similarly, the edge label histogram is a vector g = (g_1, g_2, ..., g_s) such that g_i = |{(u, v) ∈ E | φ(u, v) = i}| for each i ∈ Σ. The edge label histogram kernel K_EH(G, G′) is defined as the linear kernel between g and g′ for the respective histograms:
$$K_{EH}(G, G') = \langle g, g' \rangle = \sum_{i=1}^{s} g_i g'_i.$$
Finally, we introduce the vertex-edge label histogram. Let h = (h_{111}, h_{211}, ..., h_{sss}) be a histogram vector such that h_{ijk} = |{(u, v) ∈ E | φ(u, v) = i, φ(u) = j, φ(v) = k}| for each i, j, k ∈ Σ. The vertex-edge label histogram kernel K_VEH(G, G′) is defined as the linear kernel between h and h′ for the respective histograms of G and G′:
$$K_{VEH}(G, G') = \langle h, h' \rangle = \sum_{i,j,k=1}^{s} h_{ijk} h'_{ijk}.$$
Notice that K_VEH(G, G′) = K_EH(G, G′) if vertices are not labeled. From the definition of the direct product of graphs, we can confirm the following relationships between histogram kernels and the random walk kernel.
Lemma 1. For a pair of graphs G, G′ and their direct product G_×, we have
$$K_{VH}(G, G') = \frac{1}{\lambda_0} K^0_\times(G, G') = |V_\times|, \qquad K_{VEH}(G, G') = \frac{1}{\lambda_1} K^1_\times(G, G') - \frac{\lambda_0}{\lambda_1} K^0_\times(G, G') = \sum_{i,j=1}^{|V_\times|} [A_\times]_{ij}.$$
Proof. The first equation K_VH(G, G′) = |V_×| can be proven from the following:
$$K_{VH}(G, G') = \sum_{v \in V} |\{\, v' \in V' \mid \varphi(v) = \varphi'(v') \,\}| = |\{\, (v, v') \in V \times V' \mid \varphi(v) = \varphi'(v') \,\}| = |V_\times| = \frac{1}{\lambda_0} K^0_\times(G, G').$$
We can prove the second equation in a similar fashion:
$$K_{VEH}(G, G') = 2 \sum_{(u,v) \in E} |\{\, (u', v') \in E' \mid \varphi(u, v) = \varphi'(u', v'),\ \varphi(u) = \varphi'(u'),\ \varphi(v) = \varphi'(v') \,\}|$$
$$= 2\,|\{\, ((u, v), (u', v')) \in E \times E' \mid \varphi(u, v) = \varphi'(u', v'),\ \varphi(u) = \varphi'(u'),\ \varphi(v) = \varphi'(v') \,\}|$$
$$= 2|E_\times| = \sum_{i,j=1}^{|V_\times|} [A_\times]_{ij} = \frac{1}{\lambda_1} K^1_\times(G, G') - \frac{\lambda_0}{\lambda_1} K^0_\times(G, G'). \qquad \blacksquare$$
Finally, let us define a new kernel
$$K_H(G, G') := K_{VH}(G, G') + \lambda K_{VEH}(G, G') \qquad (1)$$
with a parameter λ. From Lemma 1, since K_H(G, G′) = K^1_×(G, G′) holds if λ_0 = 1 and λ_1 = λ in the one-step random walk kernel K^1_×, we have the following relationship from Theorem 1.
Corollary 3. For a pair of graphs G and G′, we have
$$K_H(G, G') \ \le\ K_{GR}(G, G') \ \le\ K_H(G, G') + \varepsilon,$$
where ε is given in Theorem 1.
To summarize, our results show that if the parameter λ of the geometric random walk kernel K_GR is small enough, random walks halt, and K_GR reduces to K_H, which finally converges to K_VH, a kernel based on vertex histograms only that completely ignores the topological structure of the graphs.
3 Experiments
We empirically examine the halting phenomenon of the geometric random walk kernel on popular real-world graph benchmark datasets and semi-simulated graph data.
3.1 Experimental Setup
Environment. We used Amazon Linux AMI release 2015.03 and ran all experiments on a single core of a 2.5 GHz Intel Xeon CPU E5-2670 with 244 GB of memory. All kernels were implemented in C++ with the Eigen library and compiled with gcc 4.8.2.
Datasets. We collected five real-world graph classification benchmark datasets:¹ ENZYMES, NCI1, NCI109, MUTAG, and D&D, which are popular in the graph-classification literature [13, 14]. ENZYMES and D&D are proteins, and NCI1, NCI109, and MUTAG are chemical compounds. Statistics of these datasets are summarized in Table 1, in which we also show the maximum of the maximum degrees of product graphs, max_{G,G′∈𝒢} Δ_×, for each dataset 𝒢.
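The baseline label histogram kernels of Section 2.3, which serve as comparison methods in the experiments, reduce to a few lines of code. A sketch with hypothetical names (not the paper's C++ implementation), for graphs whose vertex labels are given as integer arrays:

```python
import numpy as np

def vertex_hist_kernel(vlabels, vlabels2, s):
    # K_VH = <f, f'> with f_i = #{v : label(v) = i}, labels in {0, ..., s-1}.
    f = np.bincount(vlabels, minlength=s)
    f2 = np.bincount(vlabels2, minlength=s)
    return int(f @ f2)

def kh_kernel(kvh, kveh, lam):
    # K_H = K_VH + lam * K_VEH, equation (1); by Lemma 1 this equals
    # the one-step random walk kernel K^1 with lambda_0 = 1, lambda_1 = lam.
    return kvh + lam * kveh
```

Note that K_VH also equals |V_×|, the number of label-matching vertex pairs, which can be used as a quick consistency check against a product-graph construction.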
We consistently used λ_max = (max_{G,G′∈𝒢} Δ_×)^{-1} as the upper bound of λ in geometric random walk kernels; the gap between Δ_× and the average degree d̄_× (a lower bound on μ_{×,max}) was less than one order of magnitude. The average degrees of the product graphs were 18.17, 7.93, 5.60, 6.21, and 13.31 for ENZYMES, NCI1, NCI109, MUTAG, and D&D, respectively.
Kernels. We employed the following graph kernels in our experiments: linear kernels on vertex label histograms (K_VH), edge label histograms (K_EH), and vertex-edge label histograms (K_VEH), and the combination K_H introduced in Equation (1). We also included a Gaussian RBF kernel between vertex-edge label histograms, denoted K_{VEH,G}. From the family of random walk kernels, we used the geometric random walk kernel K_GR and the k-step random walk kernel K^k_×. Only the number k of steps was treated as a parameter in K^k_×; λ_k was fixed to 1 for all k. We used fixed-point iterations [14, Section 4.3] for efficient computation of K_GR. Moreover, we employed the Weisfeiler-Lehman subtree kernel [13], denoted K_WL, as the state-of-the-art graph kernel, which has a parameter h, the number of iterations.
3.2 Results on Real-World Datasets
We first compared the geometric random walk kernel K_GR to other kernels in graph classification. The classification accuracy of each graph kernel was examined by 10-fold cross validation with multiclass C-support vector classification (libsvm² was used), in which the parameter C for C-SVC and a parameter (if one exists) of each kernel were chosen by internal 10-fold cross validation (CV) on only the training dataset. We repeated the whole experiment 10 times and reported average classification accuracies with their standard errors. The list of parameters optimized by the internal CV is as follows: C ∈ {2^{-7}, 2^{-5}, ..., 2^5, 2^7} for C-SVC, the width σ ∈ {10^{-2}, ..., 10^2} in the RBF kernel K_{VEH,G}, the number of steps k ∈ {1, ..., 10} in K^k_×, the number of iterations h ∈ {1, ..., 10} in K_WL, and λ ∈ {10^{-5}, ..., 10^{-2}, λ_max} in K_H and K_GR, where λ_max = (max_{G,G′∈𝒢} Δ_×)^{-1}.
¹ The code and all datasets are available at: http://www.bsse.ethz.ch/mlcb/research/machine-learning/graph-kernels.html
² http://www.csie.ntu.edu.tw/~cjlin/libsvm/
Table 1: Statistics of graph datasets; |Σ_V| and |Σ_E| denote the number of vertex and edge labels.
Dataset   Size  #classes  avg.|V|  avg.|E|  max|V|  max|E|  |Σ_V|  |Σ_E|  max Δ_×
ENZYMES    600     6        32.63    62.14     126     149     3      1      65
NCI1      4110     2        29.87    32.30     111     119    37      3      16
NCI109    4127     2        29.68    32.13     111     119    38      3      17
MUTAG      188     2        17.93    19.79      28      33     7     11      10
D&D       1178     2       284.32   715.66    5748   14267    82      1      50
[Figure 1: Classification accuracy on real-world datasets (means ± SD) for (a) ENZYMES, (b) NCI1, and (c) NCI109. Each row shows (i) a comparison of various graph kernels, (ii) a comparison of K_GR with K_H as a function of the parameter λ, and (iii) the k-step kernel K^k_× as a function of the number of steps k.]
Results are summarized in the left column of Figure 1 for ENZYMES, NCI1, and NCI109. We present results on MUTAG and D&D in the Supplementary Notes, as different graph kernels do not give significantly different results on them (e.g., [13]). Overall, we could observe two trends.
First, the Weisfeiler-Lehman subtree kernel K_WL was the most accurate, which confirms the results in [13].
[Figure 2: Distribution of log₁₀ ε′, where ε′ is defined in Corollary 1, in the real-world datasets: (a) ENZYMES, (b) NCI1, (c) NCI109.]
[Figure 3: Classification accuracy on semi-simulated datasets (means ± SD): (a) Sim-ENZYMES, (b) Sim-NCI1, (c) Sim-NCI109.]
Second, the two random walk kernels K_GR and K^k_× show greater accuracy than naïve linear kernels on edge and vertex histograms, which indicates that halting is not occurring in these datasets. It is also noteworthy that employing a Gaussian RBF kernel on vertex-edge histograms leads to a clear improvement over linear kernels on all three datasets. On ENZYMES, the Gaussian kernel is even on par with the random walks in terms of accuracy. To investigate the effect of halting in more detail, we show the accuracy of K_GR and K_H in the center column of Figure 1 for various choices of λ, from 10^{-5} to its upper bound. We can clearly see that halting occurs for small λ, which greatly affects the performance of K_GR. More specifically, if λ is chosen to be very small (smaller than 10^{-3} in our datasets), the accuracies are close to the naïve baseline K_H that ignores the topological structure of graphs. However, accuracies are much closer to those reached by the Weisfeiler-Lehman kernel if λ is close to its theoretical maximum. Of course, the theoretical maximum of λ depends on unseen test data in reality. Therefore, we often have to set λ conservatively so that we can apply the trained model to any unseen graph data.
Moreover, we also investigated the accuracy of the k-step random walk kernel K^k_× as a function of the number of steps k. Results are shown in the right column of Figure 1. In all datasets, accuracy improves with each step, up to four to five steps. The optimal number of steps in K^k_× and the maximum λ give similar accuracy levels. We also confirmed Theorem 1: conservative choices of λ (10^{-3} or less) give the same accuracy as a one-step random walk. In addition, Figure 2 shows histograms of log₁₀ ε′, where ε′ is given in Corollary 1 for λ = (max Δ_×)^{-1}, for all pairs of graphs in the respective datasets. The value ε′ can be viewed as the deviation of K_GR from K_H in percentages. Although ε′ is small on average (about 0.1 percent in the ENZYMES and NCI datasets), we confirmed the existence of relatively large ε′ in the plot (more than 1 percent), which might cause the difference between K_GR and K_H.
3.3 Results on Semi-Simulated Datasets
To empirically study halting, we generated semi-simulated graphs from our three benchmark datasets (ENZYMES, NCI1, and NCI109) and compared the three kernels K_GR, K_H, and K_VH. In each dataset, we artificially generated denser graphs by randomly adding edges, in which the number of new edges per graph was determined from a normal distribution with mean m ∈ {10, 20, 50, 100}; the distribution of edge labels was unchanged. Note that the accuracy of the vertex histogram kernel K_VH always stays the same, as we only added edges. Results are plotted in Figure 3. There are two key observations. First, by adding new false edges to the graphs, the accuracy levels drop for both the random walk kernel and the histogram kernel. However, even after adding 100 new false edges per graph, they are both still better than a naïve classifier that assigns all graphs to the same class (accuracy of 16.6 percent on ENZYMES and approximately 50 percent on NCI1 and NCI109).
Second, the geometric random walk kernel quickly approaches the accuracy level of KH when new edges are added. This is a strong indicator that halting occurs. As graphs become denser, the upper bound for λ gets smaller, and the accuracy of the geometric random walk kernel KGR rapidly drops and converges to KH. This result confirms Corollary 3, which says that both KGR and KH converge to KVH as λ goes to 0. 4 Discussion In this work, we show when and where the phenomenon of halting occurs in random walk kernels. Halting refers to the fact that similarity measures based on counting walks (of potentially infinite length) often downweight longer walks so much that the similarity score is completely dominated by walks of length 1, degenerating the random walk kernel to a simple kernel between edges and vertices. While it had been conjectured that this problem may arise in graph kernels [1], we provide the first theoretical proof and empirical demonstration of the occurrence and extent of halting in geometric random walk kernels. We show that the difference between geometric random walk kernels and simple edge kernels depends on the maximum degree of the graphs being compared. With increasing maximum degree, the difference converges to zero. We empirically demonstrate on simulated graphs that the comparison of graphs with high maximum degrees suffers from halting. On real graph data from popular graph classification benchmark datasets, the maximum degree is so low that halting can be avoided if the decaying weight λ is set close to its theoretical maximum. Still, if λ is set conservatively to a low value to ensure convergence, halting can clearly be observed, even on unseen test graphs with unknown maximum degrees. There is an interesting connection between halting and tottering [1, Section 2.1.5], a weakness of random walk kernels described more than a decade ago [11]. 
Tottering is the phenomenon that a walk of infinite length may go back and forth along the same edge, thereby creating an artificially inflated similarity score if two graphs share a common edge. Halting and tottering seem to be opposing effects. If halting occurs, the effect of tottering is reduced and vice versa. Halting downweights these tottering walks and counteracts the inflation of the similarity scores. An interesting point is that the strategies proposed to remove tottering from walk kernels did not lead to a clear improvement in classification accuracy [11], while we observed a strong negative effect of halting on the classification accuracy in our experiments (Section 3). This finding stresses the importance of studying halting. Our theoretical and empirical results have important implications for future applications of random walk kernels. First, if the geometric random walk kernel is used on a graph dataset with known maximum degree, λ should be close to the theoretical maximum. Second, simple baseline kernels based on vertex and edge label histograms should be employed to check empirically if the random walk kernel gives better accuracy results than these baselines. Third, particularly in datasets with high maximum degree, we advise using a fixed-length-k random walk kernel rather than a geometric random walk kernel. Optimizing the length k by cross validation on the training dataset led to competitive or superior results compared to the geometric random walk kernel in all of our experiments. Based on these results and the fact that by definition the fixed-length kernel does not suffer from halting, we recommend using the fixed-length random walk kernel as a comparison method in future studies on novel graph kernels. Acknowledgments. 
This work was supported by JSPS KAKENHI Grant Number 26880013 (MS), the Alfried Krupp von Bohlen und Halbach-Stiftung (KB), the SNSF Starting Grant ‘Significant Pattern Mining’ (KB), and the Marie Curie Initial Training Network MLPM2012, Grant No. 316861 (KB).
References
[1] Borgwardt, K. M. Graph Kernels. PhD thesis, Ludwig-Maximilians-University Munich, 2007.
[2] Borgwardt, K. M., Ong, C. S., Schönauer, S., Vishwanathan, S. V. N., Smola, A. J., and Kriegel, H.-P. Protein function prediction via graph kernels. Bioinformatics, 21(suppl 1):i47–i56, 2005.
[3] Brualdi, R. A. The Mutually Beneficial Relationship of Graphs and Matrices. AMS, 2011.
[4] Costa, F. and Grave, K. D. Fast neighborhood subgraph pairwise distance kernel. In Proceedings of the 27th International Conference on Machine Learning (ICML), 255–262, 2010.
[5] Gärtner, T., Flach, P., and Wrobel, S. On graph kernels: Hardness results and efficient alternatives. In Learning Theory and Kernel Machines (LNCS 2777), 129–143, 2003.
[6] Girvan, M. and Newman, M. E. J. Community structure in social and biological networks. Proceedings of the National Academy of Sciences (PNAS), 99(12):7821–7826, 2002.
[7] Kashima, H., Tsuda, K., and Inokuchi, A. Marginalized kernels between labeled graphs. In Proceedings of the 20th International Conference on Machine Learning (ICML), 321–328, 2003.
[8] Katz, L. A new status index derived from sociometric analysis. Psychometrika, 18(1):39–43, 1953.
[9] Kriege, N., Neumann, M., Kersting, K., and Mutzel, P. Explicit versus implicit graph feature maps: A computational phase transition for walk kernels. In Proceedings of the IEEE International Conference on Data Mining (ICDM), 881–886, 2014.
[10] Liben-Nowell, D. and Kleinberg, J. The link-prediction problem for social networks. Journal of the American Society for Information Science and Technology, 58(7):1019–1031, 2007.
[11] Mahé, P., Ueda, N., Akutsu, T., Perret, J.-L., and Vert, J.-P. Extensions of marginalized graph kernels.
In Proceedings of the 21st International Conference on Machine Learning (ICML), 2004.
[12] Shervashidze, N. and Borgwardt, K. M. Fast subtree kernels on graphs. In Advances in Neural Information Processing Systems (NIPS) 22, 1660–1668, 2009.
[13] Shervashidze, N., Schweitzer, P., van Leeuwen, E. J., Mehlhorn, K., and Borgwardt, K. M. Weisfeiler-Lehman graph kernels. Journal of Machine Learning Research, 12:2539–2561, 2011.
[14] Vishwanathan, S. V. N., Schraudolph, N. N., Kondor, R., and Borgwardt, K. M. Graph kernels. Journal of Machine Learning Research, 11:1201–1242, 2010.
Kullback-Leibler Proximal Variational Inference
Mohammad Emtiyaz Khan∗, École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland. emtiyaz@gmail.com
Pierre Baqué∗, École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland. pierre.baque@epfl.ch
François Fleuret, Idiap Research Institute, Martigny, Switzerland. francois.fleuret@idiap.ch
Pascal Fua, École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland. pascal.fua@epfl.ch
Abstract
We propose a new variational inference method based on a proximal framework that uses the Kullback-Leibler (KL) divergence as the proximal term. We make two contributions towards exploiting the geometry and structure of the variational bound. First, we propose a KL proximal-point algorithm and show its equivalence to variational inference with natural gradients (e.g., stochastic variational inference). Second, we use the proximal framework to derive efficient variational algorithms for non-conjugate models. We propose a splitting procedure to separate non-conjugate terms from conjugate ones. We linearize the non-conjugate terms to obtain subproblems that admit a closed-form solution. Overall, our approach converts inference in a non-conjugate model to subproblems that involve inference in well-known conjugate models. We show that our method is applicable to a wide variety of models and can result in computationally efficient algorithms. Applications to real-world datasets show comparable performances to existing methods.
1 Introduction
Variational methods are a popular alternative to Markov chain Monte Carlo (MCMC) methods for Bayesian inference. They have been used extensively for their speed and ease of use. In particular, methods based on evidence lower-bound optimization (ELBO) are quite popular because they convert a difficult integration problem to an optimization problem. This reformulation enables the application of optimization techniques for large-scale Bayesian inference.
Recently, an approach called stochastic variational inference (SVI) has gained popularity for inference in conditionally-conjugate exponential family models [1]. SVI exploits the geometry of the posterior distribution by using natural gradients and uses a stochastic method to improve scalability. The resulting updates are simple and easy to implement. Several generalizations of SVI have been proposed for general latent-variable models where the lower bound might be intractable [2, 3, 4]. These generalizations, although important, do not take the geometry of the posterior distribution into account. In addition, none of these approaches exploit the structure of the lower bound. In practice, not all factors of the joint distribution introduce difficulty in the optimization. It is therefore desirable to treat “difficult” terms differently from “easy” terms.
∗ A note on contributions: P. Baqué proposed the use of the KL proximal term and showed that the resulting proximal steps have closed-form solutions. The rest of the work was carried out by M. E. Khan.
In this context, we propose a splitting method for variational inference; this method exploits both the structure and the geometry of the lower bound. Our approach is based on the proximal-gradient framework. We make two important contributions. First, we propose a proximal-point algorithm that uses the Kullback-Leibler (KL) divergence as the proximal term. We show that the addition of this term incorporates the geometry of the posterior distribution. We establish the equivalence of our approach to variational methods that use natural gradients (e.g., [1, 5, 6]). Second, following the proximal-gradient framework, we propose a splitting approach for variational inference. In this approach, we linearize difficult terms such that the resulting optimization problem is easy to solve. We apply this approach to variational inference on non-conjugate models.
We show that linearizing non-conjugate terms leads to subproblems that have closed-form solutions. Our approach therefore converts inference in a non-conjugate model to subproblems that involve inference in well-known conjugate models, for which efficient implementations exist.
2 Latent Variable Models and Evidence Lower-Bound Optimization
Consider a general latent-variable model with a data vector y of length N and a latent vector z of length D, following a joint distribution p(y, z) (we drop the parameters of the distribution from the notation). ELBO methods approximate the posterior p(z|y) by a distribution q(z|λ) that maximizes a lower bound to the marginal likelihood. Here, λ is the vector of parameters of the distribution q. As shown in (1), the lower bound is obtained by first multiplying and dividing by q(z|λ), and then applying Jensen's inequality using the concavity of the logarithm. The approximate posterior q(z|λ) is obtained by maximizing the lower bound with respect to λ:
$$\log p(y) = \log \int q(z|\lambda)\, \frac{p(y, z)}{q(z|\lambda)}\, dz \ \ge\ \max_{\lambda}\ \mathbb{E}_{q(z|\lambda)}\!\left[ \log \frac{p(y, z)}{q(z|\lambda)} \right] := \mathcal{L}(\lambda). \qquad (1)$$
Unfortunately, the lower bound may not always be easy to optimize, e.g., some terms in the lower bound might be intractable or might admit a form that is not easy to optimize. In addition, the optimization can be slow when N and D are large.
3 The KL Proximal-Point Algorithm for Conjugate Models
In this section, we introduce a proximal-point method based on a Kullback-Leibler (KL) proximal function and establish its relation to existing approaches based on natural gradients [1, 5, 6]. In particular, for conditionally-conjugate exponential-family models, we show that each iteration of our proximal-point approach is equivalent to a step along the natural gradient. The Kullback-Leibler (KL) divergence between two distributions q(z|λ) and q(z|λ′) is defined as follows:
$$D_{KL}\big[\, q(z|\lambda)\ \|\ q(z|\lambda')\,\big] := \mathbb{E}_{q(z|\lambda)}\big[ \log q(z|\lambda) - \log q(z|\lambda') \big].$$
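For concreteness, the KL divergence just defined can be evaluated in closed form for Gaussians. A small sketch (my own example, not from the paper) that checks the closed form against a Monte Carlo estimate of E_q[log q − log q′]:

```python
import numpy as np

def kl_gauss(m1, v1, m2, v2):
    # Closed-form KL[N(m1, v1) || N(m2, v2)] for univariate Gaussians.
    return 0.5 * (np.log(v2 / v1) + (v1 + (m1 - m2) ** 2) / v2 - 1.0)

def kl_monte_carlo(m1, v1, m2, v2, n=200000, seed=0):
    # D_KL[q || q'] = E_q[log q(z) - log q'(z)], estimated with z ~ q.
    rng = np.random.default_rng(seed)
    z = rng.normal(m1, np.sqrt(v1), size=n)
    logq = -0.5 * np.log(2 * np.pi * v1) - (z - m1) ** 2 / (2 * v1)
    logq2 = -0.5 * np.log(2 * np.pi * v2) - (z - m2) ** 2 / (2 * v2)
    return np.mean(logq - logq2)
```

Note the asymmetry of D_KL (swapping the arguments changes the value), which is why the symmetric variant D^{sym}_{KL} appears separately in the natural-gradient formulation below equation (3).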
Using the KL divergence as the proximal term, we introduce a proximal-point algorithm that generates a sequence of λ_k by solving the following subproblems, given an initial value λ_0 and a bounded sequence of step-sizes β_k > 0:
$$\text{KL Proximal-Point:}\qquad \lambda_{k+1} = \arg\max_{\lambda}\ \mathcal{L}(\lambda) - \frac{1}{\beta_k} D_{KL}\big[\, q(z|\lambda)\ \|\ q(z|\lambda_k)\,\big]. \qquad (2)$$
One benefit of using the KL term is that it takes the geometry of the posterior distribution into account. This fact has led to its extensive use in both the optimization and statistics literature, e.g., for speeding up the expectation-maximization algorithm [7, 8], for convex optimization [9], for message-passing in graphical models [10], and for approximate Bayesian inference [11, 12, 13].
Relationship to methods that use natural gradients: An alternative approach to incorporating the geometry of the posterior distribution is to use natural gradients [6, 5, 1]. We now establish its relationship to our approach. The natural gradient can be interpreted as finding a descent direction that ensures a fixed amount of change in the distribution. For variational inference, this is equivalent to the following [1, 14]:
$$\arg\max_{\Delta\lambda}\ \mathcal{L}(\lambda_k + \Delta\lambda), \quad \text{s.t.}\quad D^{sym}_{KL}\big[\, q(z|\lambda_k + \Delta\lambda)\ \|\ q(z|\lambda_k)\,\big] \le \epsilon, \qquad (3)$$
where D^{sym}_{KL} is the symmetric KL divergence. The proximal-point subproblem (2) is related to a Lagrangian of the above optimization. In fact, as we show below, the two problems are equivalent for conditionally-conjugate exponential-family models. We consider the set-up described in [15], which is slightly more general than that of [1]. Consider a Bayesian network with nodes z_i and a joint distribution ∏_i p(z_i|pa_i), where pa_i are the parents of z_i. We assume that each factor is an exponential-family distribution defined as follows:
$$p(z_i|\mathrm{pa}_i) := h_i(z_i)\, \exp\!\big[\, \eta_i^T(\mathrm{pa}_i)\, T_i(z_i) - A_i(\eta_i) \,\big], \qquad (4)$$
where η_i is the natural parameter, T_i(z_i) is the sufficient statistic, A_i(η_i) is the partition function, and h_i(z_i) is the base measure.
We seek a factorized approximation, shown in (5), where each z_i belongs to the same exponential family as the corresponding factor of the joint distribution. The parameters of this distribution are denoted by λ_i to differentiate them from the joint-distribution parameters η_i. Also note that the subscript refers to the factor i, not to the iteration.

q(z|λ) = ∏_i q_i(z_i|λ_i),   where  q_i(z_i) := h_i(z_i) exp[ λ_i^⊤ T_i(z_i) − A_i(λ_i) ].   (5)

For this model, we show the following equivalence between a gradient-descent method based on natural gradients and our proximal-point approach. The proof is given in the supplementary material.

Theorem 1. For the model shown in (4) and the posterior approximation shown in (5), the sequence λ_k generated by the proximal-point algorithm of (2) is equal to the one obtained using gradient descent along the natural gradient with step lengths β_k/(1 + β_k).

Proof of convergence: Convergence of the proximal-point algorithm (2) is proved in [8]. We give a summary of the results here. We assume β_k = 1; however, the proof holds for any bounded sequence of β_k. Let the space of all λ be denoted by S. Define the set S_0 := {λ ∈ S : L(λ) ≥ L(λ_0)}. Then, ∥λ_{k+1} − λ_k∥ → 0 under the following conditions: (A) a maximum of L exists, and the gradient of L is continuous and defined in S_0; (B) the KL divergence and its gradient are continuous and defined in S_0 × S_0; (C) D_KL[q(z|λ) ∥ q(z|λ′)] = 0 only when λ′ = λ. In our case, conditions (A) and (B) are either assumed or satisfied, and condition (C) can be ensured by choosing an appropriate parameterization of q.
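As an aside on the parameterization in (4)-(5): for a multivariate Gaussian q(z) = N(m, V), the natural parameters are λ = (V⁻¹m, −½V⁻¹) with sufficient statistics T(z) = (z, zz^⊤). A quick round-trip check of this standard mapping (our own sketch, not code from the paper) illustrates how condition (C) can be ensured by a valid parameterization:

```python
import numpy as np

def gauss_to_natural(m, V):
    """Mean/covariance -> natural parameters of a Gaussian in the family (5)."""
    P = np.linalg.inv(V)
    return P @ m, -0.5 * P

def natural_to_gauss(eta1, eta2):
    """Inverse mapping: natural parameters -> mean/covariance."""
    V = np.linalg.inv(-2.0 * eta2)
    return V @ eta1, V
```

Because the mapping is a bijection on valid parameters, two Gaussians with zero KL divergence must share the same λ, which is exactly condition (C).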
In this section, we present an algorithm based on the proximal-gradient framework, where we first split the objective function into "difficult" and "easy" terms and then, to simplify the optimization, linearize the difficult term. See [16] for a good review of proximal methods for machine learning. We split the ratio p(y, z)/q(z|λ) ≡ c · p̃_d(z|λ) p̃_e(z|λ), where p̃_d contains all factors that make the optimization difficult, p̃_e contains the rest, and c is a constant. This results in the following split, with f(λ) := E_{q(z|λ)}[log p̃_d(z|λ)] and h(λ) := E_{q(z|λ)}[log p̃_e(z|λ)]:

L(λ) = E_{q(z|λ)}[ log p(y, z)/q(z|λ) ] = f(λ) + h(λ) + log c.   (6)

Note that p̃_d and p̃_e can be un-normalized factors of the distribution. In the worst case, we can set p̃_e(z|λ) ≡ 1 and take the rest as p̃_d(z|λ). We give an example of the split in the next section. The main idea is to linearize the difficult term f such that the resulting problem admits a simple form. Specifically, we use a proximal-gradient algorithm that maximizes L by solving the following sequence of subproblems, where ∇f(λ_k) is the gradient of f at λ_k:

KL Proximal-Gradient:   λ_{k+1} = arg max_λ  λ^⊤ ∇f(λ_k) + h(λ) − (1/β_k) D_KL[q(z|λ) ∥ q(z|λ_k)].   (7)

Note that our linear approximation is equivalent to the one used in gradient descent. Also, the approximation is tight at λ_k. Therefore, it does not introduce any error into the optimization; rather, it only acts as a surrogate for taking the next step. Existing variational methods have used approximations such as ours, e.g., see [17, 18, 19]. Most of these methods first approximate the log p̃_d(z|λ) term by a linear or quadratic approximation and then compute the expectation. As a result, the approximation is not tight and can result in poor performance [20]. In contrast, our approximation is applied directly to E[log p̃_d(z|λ)] and is therefore tight at λ_k.
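The key property used above is that the linearization of f is tight at λ_k: the surrogate matches f's value (and gradient) there, so it introduces no error at the current iterate. A small numerical check, using our own toy choice f(m, v) = E_{N(m,v)}[−z⁴] = −(m⁴ + 6m²v + 3v²) rather than any function from the paper:

```python
import numpy as np

def f(m, v):
    """f(lambda) = E_{N(m,v)}[-z^4], available in closed form."""
    return -(m**4 + 6 * m**2 * v + 3 * v**2)

def f_grad(m, v):
    """Exact gradient of f with respect to (m, v)."""
    return -(4 * m**3 + 12 * m * v), -(6 * m**2 + 6 * v)

def surrogate(m, v, mk, vk):
    """First-order surrogate of f at (mk, vk), as used in subproblem (7)."""
    gm, gv = f_grad(mk, vk)
    return f(mk, vk) + gm * (m - mk) + gv * (v - vk)
```

At (m, v) = (m_k, v_k) the surrogate and f coincide exactly, which is the tightness property contrasted above with methods that linearize inside the expectation.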
The convergence of our approach is covered by the results shown in [21], which prove convergence of an algorithm more general than ours. Below, we summarize the results. As before, we assume that the maximum exists and L is continuous. We make three additional assumptions. First, the gradient of f is L-Lipschitz continuous in S, i.e., ∥∇f(λ) − ∇f(λ′)∥ ≤ L∥λ − λ′∥ for all λ, λ′ ∈ S. Second, the function h is concave. Third, there exists an α > 0 such that

(λ_{k+1} − λ_k)^⊤ ∇_1 D_KL[q(z|λ_{k+1}) ∥ q(z|λ_k)] ≥ α ∥λ_{k+1} − λ_k∥²,   (8)

where ∇_1 denotes the gradient with respect to the first argument. Under these conditions, ∥λ_{k+1} − λ_k∥ → 0 when 0 < β_k < α/L. The choice of the constant α is also discussed in [21]. Note that even though h is required to be concave, f can be non-convex. The lower bound usually contains concave terms, e.g., the entropy term. In the worst case, when there are no concave terms, we can simply choose h ≡ 0.

5 Examples of KL Proximal-Gradient Variational Inference

In this section, we show a few examples where the subproblem (7) has a closed-form solution.

Generalized linear model: We consider the generalized linear model shown in (9). Here, y is the output vector (of length N) whose n'th entry is y_n, whereas X is an N × D feature matrix that contains the feature vectors x_n^⊤ as rows. The weight vector z is a Gaussian with mean μ and covariance Σ. To obtain the probability of y_n, the linear predictor x_n^⊤ z is passed through p(y_n|·):

p(y, z) := ∏_{n=1}^N p(y_n | x_n^⊤ z) N(z|μ, Σ).   (9)

We restrict the posterior distribution to be a Gaussian q(z|λ) = N(z|m, V) with mean m and covariance V, so λ := {m, V}. For this posterior family, the non-Gaussian terms p(y_n|x_n^⊤ z) are difficult to handle, while the Gaussian term N(z|μ, Σ) is easy because it is conjugate to q. Therefore, we set p̃_e(z|λ) ≡ N(z|μ, Σ)/N(z|m, V) and let the rest of the terms go into p̃_d. Substituting into (6) and using the definition of the KL divergence, we get the lower bound shown below in (10).
The first term is the function f that will be linearized, and the second term is the function h:

L(m, V) := ∑_{n=1}^N E_{q(z|λ)}[ log p(y_n | x_n^⊤ z) ] + E_{q(z|λ)}[ log N(z|μ, Σ)/N(z|m, V) ] =: f(m, V) + h(m, V).   (10)

For the linearization, we compute the gradient of f using the chain rule. Denote f_n(m̃_n, ṽ_n) := E_{q(z|λ)}[log p(y_n | x_n^⊤ z)], where m̃_n := x_n^⊤ m and ṽ_n := x_n^⊤ V x_n. The gradients of f w.r.t. m and V can then be expressed in terms of the gradients of f_n w.r.t. m̃_n and ṽ_n:

∇_m f(m, V) = ∑_{n=1}^N x_n ∇_{m̃_n} f_n(m̃_n, ṽ_n),   ∇_V f(m, V) = ∑_{n=1}^N x_n x_n^⊤ ∇_{ṽ_n} f_n(m̃_n, ṽ_n).   (11)

For notational simplicity, we denote the gradients of f_n at m̃_{nk} := x_n^⊤ m_k and ṽ_{nk} := x_n^⊤ V_k x_n by

α_{nk} := −∇_{m̃_n} f_n(m̃_{nk}, ṽ_{nk}),   γ_{nk} := −2 ∇_{ṽ_n} f_n(m̃_{nk}, ṽ_{nk}).   (12)

Using (11) and (12), we get the following linear approximation of f:

f(m, V) ≈ λ^⊤ ∇f(λ_k) := m^⊤ [∇_m f(m_k, V_k)] + Tr[ V {∇_V f(m_k, V_k)} ]   (13)
        = − ∑_{n=1}^N [ α_{nk} (x_n^⊤ m) + ½ γ_{nk} (x_n^⊤ V x_n) ].   (14)

Substituting the above into (7), we get the following subproblem in the k'th iteration:

(m_{k+1}, V_{k+1}) = arg max_{m, V ≻ 0}  − ∑_{n=1}^N [ α_{nk} (x_n^⊤ m) + ½ γ_{nk} (x_n^⊤ V x_n) ] + E_{q(z|λ)}[ log N(z|μ, Σ)/N(z|m, V) ] − (1/β_k) D_KL[ N(z|m, V) ∥ N(z|m_k, V_k) ].   (15)

Taking the gradient w.r.t. m and V and setting it to zero, we get the following closed-form solutions (details are given in the supplementary material):

V_{k+1}^{−1} = r_k V_k^{−1} + (1 − r_k) [ Σ^{−1} + X^⊤ diag(γ_k) X ],   (16)
m_{k+1} = [ (1 − r_k) Σ^{−1} + r_k V_k^{−1} ]^{−1} [ (1 − r_k)(Σ^{−1} μ − X^⊤ α_k) + r_k V_k^{−1} m_k ],   (17)

where r_k := 1/(1 + β_k), and α_k and γ_k are the vectors collecting α_{nk} and γ_{nk}, respectively, over n.

Computationally efficient updates: Even though the updates are available in closed form, they are not efficient when the dimensionality D is large. In such a case, an explicit computation of V is costly because the resulting D × D matrix is extremely large. We now derive efficient updates that avoid an explicit computation of V. Our derivation involves two key steps. The first step is to show that V_{k+1} can be parameterized by γ_k.
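As an illustration of the closed-form updates (16)-(17), here is a sketch for Bayesian logistic regression, p(y_n | x_n^⊤ z) = σ(y_n x_n^⊤ z) with y_n ∈ {−1, +1}. The expectations f_n and their gradients are computed by Gauss-Hermite quadrature; that quadrature choice, the stopping rule, and all names are our own, not the paper's implementation:

```python
import numpy as np

def sigmoid(t):
    return 0.5 * (1.0 + np.tanh(0.5 * t))

def kl_proximal_logreg(X, y, mu, Sigma, beta=0.25, iters=200, tol=1e-8, n_quad=20):
    """KL proximal-gradient updates (16)-(17) for Bayesian logistic regression."""
    N, D = X.shape
    t, w = np.polynomial.hermite.hermgauss(n_quad)
    w = w / np.sqrt(np.pi)                 # E_{N(m,v)}[g] ~ sum_j w_j g(m + sqrt(2v) t_j)
    Sinv = np.linalg.inv(Sigma)
    m, V = mu.copy(), Sigma.copy()
    r = 1.0 / (1.0 + beta)                 # r_k = 1/(1 + beta_k)
    for _ in range(iters):
        mt = X @ m                                  # m-tilde_n = x_n^T m
        vt = np.einsum('nd,dc,nc->n', X, V, X)      # v-tilde_n = x_n^T V x_n
        U = mt[:, None] + np.sqrt(2.0 * vt)[:, None] * t[None, :]
        s = sigmoid(y[:, None] * U)
        alpha = -((y[:, None] * (1.0 - s)) @ w)     # eq. (12): -grad of f_n wrt m-tilde
        gamma = (s * (1.0 - s)) @ w                 # eq. (12): -2 grad of f_n wrt v-tilde
        Vinv = np.linalg.inv(V)
        Vinv_new = r * Vinv + (1 - r) * (Sinv + X.T @ (gamma[:, None] * X))  # eq. (16)
        m_new = np.linalg.solve(
            (1 - r) * Sinv + r * Vinv,
            (1 - r) * (Sinv @ mu - X.T @ alpha) + r * (Vinv @ m))            # eq. (17)
        V_new = np.linalg.inv(Vinv_new)
        done = np.linalg.norm(m_new - m) + np.linalg.norm(V_new - V) < tol
        m, V = m_new, V_new
        if done:
            break
    return m, V
```

Note that γ_{nk} = E[σ(1 − σ)] > 0 for the logistic likelihood, so the precision update (16) keeps V positive definite.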
Specifically, if we initialize V_0 = Σ, then we can show that

V_{k+1} = [ Σ^{−1} + X^⊤ diag(γ̃_{k+1}) X ]^{−1},   where  γ̃_{k+1} = r_k γ̃_k + (1 − r_k) γ_k,   (18)

with γ̃_0 = γ_0. A detailed derivation is given in the supplementary material. The second key step is to express the updates in terms of m̃_n and ṽ_n. For this purpose, we define some new quantities. Let m̃ be the vector whose n'th entry is m̃_n, and similarly let ṽ be the vector of the ṽ_n for all n. Denote the corresponding vectors in the k'th iteration by m̃_k and ṽ_k, respectively. Finally, define μ̃ := Xμ and Σ̃ := XΣX^⊤. Now, by using the fact that m̃ = Xm and ṽ = diag(XVX^⊤), and by applying the Woodbury matrix identity, we can express the updates in terms of m̃ and ṽ, as shown below (a detailed derivation is given in the supplementary material):

m̃_{k+1} = m̃_k + (1 − r_k)(I − Σ̃ B_k^{−1})(μ̃ − m̃_k − Σ̃ α_k),   where  B_k := Σ̃ + [diag(r_k γ̃_k)]^{−1},
ṽ_{k+1} = diag(Σ̃) − diag(Σ̃ A_k^{−1} Σ̃),   where  A_k := Σ̃ + [diag(γ̃_k)]^{−1}.   (19)

Note that these updates depend on μ̃, Σ̃, α_k, and γ_k, whose sizes depend only on N and are independent of D. Most importantly, these updates avoid an explicit computation of V and only require storing m̃_k and ṽ_k, both of which scale linearly with N. Also note that the matrices A_k and B_k differ only slightly, and we can reduce computation by using A_k in place of B_k; in our experiments, this does not create any convergence issues. To assess convergence, we can use the optimality condition. By taking the norm of the derivative of L at m_{k+1} and V_{k+1} and simplifying, we get the following criterion: ∥μ̃ − m̃_{k+1} − Σ̃ α_{k+1}∥₂² + Tr[ Σ̃ diag(γ̃_k − γ_{k+1}) Σ̃ ] ≤ ε, for some ε > 0 (the derivation is in the supplementary material).

Linear basis function model and Gaussian process: The algorithm presented above can be extended to linear basis function models by using the weight-space view presented in [22]. Consider a non-linear basis function φ(x) that maps a D-dimensional feature vector into an N-dimensional feature space.
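The low-rank reparameterization in (18)-(19) is the Woodbury identity at work: diag(XVX^⊤) computed from the D × D form of (18) must agree with the N × N kernel form of (19). A quick numerical check (our own sketch, with arbitrary made-up dimensions):

```python
import numpy as np

rng = np.random.default_rng(0)
N, D = 5, 8
X = rng.standard_normal((N, D))
B = rng.standard_normal((D, D))
Sigma = B @ B.T + np.eye(D)          # prior covariance (positive definite)
gamma_t = rng.random(N) + 0.1        # gamma-tilde > 0

# D x D form of (18): V = [Sigma^{-1} + X^T diag(gamma_t) X]^{-1}
V = np.linalg.inv(np.linalg.inv(Sigma) + X.T @ (gamma_t[:, None] * X))
v_direct = np.diag(X @ V @ X.T)

# N x N kernel form of (19): v = diag(S) - diag(S A^{-1} S), A = S + diag(gamma_t)^{-1}
S = X @ Sigma @ X.T                  # Sigma-tilde
A = S + np.diag(1.0 / gamma_t)
v_kernel = np.diag(S) - np.diag(S @ np.linalg.solve(A, S))
```

The two computations coincide, which is why the updates can run entirely in the N-dimensional (kernel) space when D is large.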
The generalized linear model of (9) is extended to a linear basis function model by replacing x_n^⊤ z with the latent function g(x) := φ(x)^⊤ z. The Gaussian prior on z then translates to a kernel function κ(x, x′) := φ(x)^⊤ Σ φ(x′) and a mean function μ̃(x) := φ(x)^⊤ μ in the latent function space. Given input vectors x_n, we define the kernel matrix Σ̃ whose (i, j)'th entry is κ(x_i, x_j) and the mean vector μ̃ whose i'th entry is μ̃(x_i). Assuming a Gaussian posterior distribution over the latent function g(x), we can compute its mean m̃(x) and variance ṽ(x) using the proximal-gradient algorithm. We define m̃ to be the vector of m̃(x_n) for all n, and similarly ṽ to be the vector of all ṽ(x_n). Following the same derivation as in the previous section, we can show that the updates of (19) give us the posterior mean m̃ and variance ṽ; these updates are the kernelized version of (16) and (17). For prediction, we only need the converged values of α_k and γ̃_k, denoted by α_* and γ̃_*, respectively. Given a new input x_*, define κ_** := κ(x_*, x_*) and κ_* to be the vector whose n'th entry is κ(x_n, x_*). The predictive mean and variance can be computed as shown below:

ṽ(x_*) = κ_** − κ_*^⊤ [ Σ̃ + (diag(γ̃_*))^{−1} ]^{−1} κ_*,   m̃(x_*) = μ̃(x_*) − κ_*^⊤ α_*.   (20)

A pseudo-code is given in Algorithm 1. Here, we initialize γ̃ to a small constant vector δ_1 1, since otherwise solving the first equation might be ill-conditioned.

Algorithm 1: Proximal-gradient algorithm for linear basis function models and Gaussian processes
  Given: training data (y, X), test input x_*, kernel mean μ̃, covariance Σ̃, step-size sequence r_k, and threshold ε.
  Initialize: m̃_0 ← μ̃, ṽ_0 ← diag(Σ̃), and γ̃_0 ← δ_1 1.
  repeat
    For all n in parallel: α_{nk} ← −∇_{m̃_n} f_n(m̃_{nk}, ṽ_{nk}) and γ_{nk} ← −2 ∇_{ṽ_n} f_n(m̃_{nk}, ṽ_{nk}).
    Update m̃_k and ṽ_k using (19).
    γ̃_{k+1} ← r_k γ̃_k + (1 − r_k) γ_k.
  until ∥μ̃ − m̃_k − Σ̃ α_k∥ + Tr[Σ̃ diag(γ̃_k − γ_{k+1}) Σ̃] ≤ ε.
  Predict at the test input x_* using (20).
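Algorithm 1 can be sketched directly in kernel space. The following is our own illustration for GP classification with a logistic likelihood and a zero mean function (so μ̃ = 0); the quadrature, fixed iteration count, and function names are our assumptions, not the paper's code:

```python
import numpy as np

def sigmoid(t):
    return 0.5 * (1.0 + np.tanh(0.5 * t))

def gp_classify(K, y, beta=0.25, iters=300, delta=0.01, n_quad=20):
    """Sketch of Algorithm 1 for GP classification, kernel matrix K (Sigma-tilde),
    labels y in {-1, +1}, zero mean function (mu-tilde = 0)."""
    N = K.shape[0]
    t, w = np.polynomial.hermite.hermgauss(n_quad)
    w = w / np.sqrt(np.pi)
    r = 1.0 / (1.0 + beta)
    mt = np.zeros(N)                 # m-tilde_0 <- mu-tilde
    vt = np.diag(K).copy()           # v-tilde_0 <- diag(Sigma-tilde)
    gt = delta * np.ones(N)          # gamma-tilde_0 <- delta * 1
    for _ in range(iters):
        U = mt[:, None] + np.sqrt(2.0 * vt)[:, None] * t[None, :]
        s = sigmoid(y[:, None] * U)
        alpha = -((y[:, None] * (1.0 - s)) @ w)   # as in eq. (12)
        gamma = (s * (1.0 - s)) @ w
        B = K + np.diag(1.0 / (r * gt))           # updates (19), mu-tilde = 0
        mt = mt + (1 - r) * (np.eye(N) - K @ np.linalg.inv(B)) @ (-mt - K @ alpha)
        A = K + np.diag(1.0 / gt)
        vt = np.diag(K) - np.diag(K @ np.linalg.solve(A, K))
        gt = r * gt + (1 - r) * gamma
    return mt, vt, alpha, gt
```

Prediction at a test input then follows (20): with a zero mean function, m̃(x_*) = −κ_*^⊤ α_* and ṽ(x_*) = κ_** − κ_*^⊤[K + diag(γ̃_*)^{−1}]^{−1} κ_*.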
These updates also work for Gaussian process (GP) models with a kernel κ(x, x′) and mean function μ̃(x), and for many other latent Gaussian models such as matrix factorization models.

6 Experiments and Results

We now present results on real data. Our goal is to show that our approach gives results comparable to existing methods and is easy to implement. We also show that, in some cases, our method is significantly faster than the alternatives due to the kernel trick. We show results on three models: Bayesian logistic regression, GP classification with a logistic likelihood, and GP regression with a Laplace likelihood. For these likelihoods, the expectations can be computed (almost) exactly using the methods described in [23, 24]. We use a fixed step-size of β_k = 0.25 and β_k = 1 for the logistic and Laplace likelihoods, respectively. We consider three datasets for each model; a summary is given in Table 1. These datasets can be found at the data repositories of LIBSVM and UCI.¹

Bayesian logistic regression: Results for Bayesian logistic regression are shown in Table 2. We consider two datasets: for 'a1a', N > D, and for 'Colon', N < D. We compare our 'Proximal' method to three existing methods: the 'MAP' method, which finds the mode of the penalized log-likelihood; the 'Mean-Field' method, where the distribution is factorized across dimensions; and the 'Cholesky' method of [25]. We implemented these methods using the 'minFunc' software by Mark Schmidt², with L-BFGS for the optimization. All algorithms are stopped when the optimality condition is below 10⁻⁴. We set the Gaussian prior to Σ = δI and μ = 0. To set the hyperparameter δ, we use cross-validation for MAP and the maximum marginal-likelihood estimate for the rest of the methods. As we compare running times as well, we use a common range of hyperparameter values for all methods; these values are shown in Table 1.
For Bayesian methods, we report the negative of the marginal likelihood approximation ('Neg-Log-Lik'). This is (the negative of) the value of the lower bound at the maximum. We also report the log-loss, computed as −∑_n log p̂_n / N, where p̂_n are the predictive probabilities of the test data and N is the total number of test pairs. A lower value is better, and a value of 1 is equivalent to random coin-flipping. In addition, we report the total time taken for hyperparameter selection. For MAP, this is the total cross-validation time, whereas for Bayesian methods it is the time taken to compute 'Neg-Log-Lik' for all hyperparameter values over the whole range. We summarize these results in Table 2. For all columns, a lower value is better.

¹ https://archive.ics.uci.edu/ml/datasets.html and http://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/
² Available at https://www.cs.ubc.ca/~schmidtm/Software/minFunc.html

| Model | Dataset | N | D | %Train | #Splits | Hyperparameter range |
|---|---|---|---|---|---|---|
| LogReg | a1a | 32,561 | 123 | 5% | 1 | δ = logspace(-3,1,30) |
| | Colon | 62 | 2000 | 50% | 10 | δ = logspace(0,6,30) |
| GP class | Ionosphere | 351 | 34 | 50% | 10 | log(l) = linspace(-1,6,15) and |
| | Sonar | 208 | 60 | 50% | 10 | log(σ) = linspace(-1,6,15) |
| | USPS-3vs5 | 1,540 | 256 | 50% | 5 | (for all GP datasets) |
| GP reg | Housing | 506 | 13 | 50% | 10 | |
| | Triazines | 186 | 60 | 50% | 10 | |
| | Space ga | 3,106 | 6 | 50% | 1 | additionally log(b) = linspace(-5,1,2) |

Table 1: A list of models and datasets. %Train is the % of training data. The last column shows the hyperparameter values ('linspace' and 'logspace' refer to Matlab commands).

| Dataset | Method | Neg-Log-Lik | Log Loss | Time |
|---|---|---|---|---|
| a1a | MAP | — | 0.499 | 27s |
| | Mean-Field | 792.8 | 0.505 | 21s |
| | Cholesky | 590.1 | 0.488 | 12m |
| | Proximal | 590.1 | 0.488 | 7m |
| Colon | MAP | — | 0.78 (0.01) | 7s (0.00) |
| | Mean-Field | 18.35 (0.11) | 0.78 (0.01) | 15m (0.04) |
| | Proximal | 15.82 (0.13) | 0.70 (0.01) | 18m (0.14) |

Table 2: A summary of the results obtained for Bayesian logistic regression. In all columns, a lower value implies better performance.
We see that for 'a1a', the fully Bayesian methods perform slightly better than MAP. More importantly, the Proximal method is faster than the Cholesky method but obtains the same error and marginal likelihood estimate. For the Proximal method, we use the updates (17) and (16) because D ≪ N, but even in this scenario, the Cholesky method is slow due to an expensive line search over a large number of parameters. For the 'Colon' dataset, we use the update (19) for the Proximal method. We do not compare to the Cholesky method here because it is too slow for large datasets. In Table 2, we see that our implementation is as fast as the Mean-Field method but performs significantly better. Overall, with the Proximal method, we achieve the same results as the Cholesky method in less time, and in some cases we can also match the running time of the Mean-Field method. Note that the Mean-Field method does not give bad predictions; the minimum value of its log-loss is comparable to our approach. However, as the Neg-Log-Lik values for the Mean-Field method are inaccurate, it ends up choosing a bad hyperparameter value. This is expected, as the Mean-Field method makes an extreme approximation; cross-validation is therefore more appropriate for the Mean-Field method.

Gaussian process classification and regression: We compare the Proximal method to expectation propagation (EP) and the Laplace approximation, using the GPML toolbox for this comparison. We use a squared-exponential kernel for the Gaussian process with two scale parameters σ and l (as defined in the GPML toolbox) and do a grid search over these hyperparameters; the grid values are given in Table 1. We report the log-loss and the running time for each method. The left plot in Figure 1 shows the log-loss for GP classification on the 'USPS 3vs5' dataset, where the Proximal method behaves very similarly to EP. These results are summarized in Table 3. We see that our method performs similarly to EP, sometimes a bit better.
The running times of EP and the Proximal method are also comparable. The advantage of our approach is that it is easier to implement than EP, and it is also numerically robust. The predictive probabilities obtained with EP and the Proximal method for the 'USPS 3vs5' dataset are shown in the right plot of Figure 1. The horizontal axis shows the test examples in ascending order, sorted according to their predictive probabilities obtained with EP; the probabilities themselves are shown on the y-axis. A higher value implies better performance, therefore the Proximal method gives estimates better than EP.

[Figure 1 omitted: contour plots over (log σ, log s) of log-loss and running time for Laplace, EP, and the Proximal method on 'USPS 3vs5', plus a plot of EP vs. Proximal predictive probabilities.]

Figure 1: In the left figure, the top row shows the log-loss and the bottom row shows the running time in seconds for the 'USPS 3vs5' dataset. In each plot, the minimum value of the log-loss is shown with a black circle. The right figure shows the predictive probabilities obtained with EP and the Proximal method. The horizontal axis shows the test examples in ascending order, sorted according to their predictive probabilities obtained with EP; the probabilities themselves are shown on the y-axis. A higher value implies better performance, therefore the Proximal method gives estimates better than EP.

| Data | Log Loss: Laplace | EP | Proximal | Time: Laplace | EP | Proximal |
|---|---|---|---|---|---|---|
| Ionosphere | .285 (.002) | .234 (.002) | .230 (.002) | 10s (.3) | 3.8m (.10) | 3.6m (.10) |
| Sonar | .410 (.002) | .341 (.003) | .317 (.004) | 4s (.01) | 45s (.01) | 63s (.13) |
| USPS-3vs5 | .101 (.002) | .065 (.002) | .055 (.003) | 1m (.06) | 1h (.06) | 1h (.02) |
| Housing | 1.03 (.004) | .300 (.006) | .310 (.009) | .36m (.00) | 25m (.65) | 61m (1.8) |
| Triazines | 1.35 (.006) | 1.36 (.006) | 1.35 (.006) | 10s (.10) | 8m (.04) | 14m (.30) |
| Space ga | 1.01 (—) | .767 (—) | .742 (—) | 2m (—) | 5h (—) | 11h (—) |

Table 3: Results for GP classification using a logistic likelihood and GP regression using a Laplace likelihood (time: s is sec, m is min, h is hr). For all rows, a lower value is better.

The improvement in performance is due to numerical error in EP's likelihood implementation. For the Proximal method, we use the method of [23], which is quite accurate; designing similarly accurate likelihood approximations for EP is challenging.

7 Discussion and Future Work

In this paper, we have proposed a proximal framework that uses the KL proximal term to take the geometry of the posterior distribution into account. We established the equivalence between our proximal-point algorithm and natural-gradient methods. We proposed a proximal-gradient algorithm that exploits the structure of the bound to simplify the optimization. An important future direction is to apply stochastic approximations to approximate gradients; this extension is discussed in [21]. It is also important to design a line-search method to set the step-sizes. In addition, our proximal framework can also be used for distributed optimization in variational inference [26, 11].

Acknowledgments

Mohammad Emtiyaz Khan would like to thank Masashi Sugiyama and Akiko Takeda from the University of Tokyo, Matthias Grossglauser and Vincent Etter from EPFL, and Hannes Nickisch from Philips Research (Hamburg) for useful discussions and feedback.
Pierre Baqué was supported in part by the Swiss National Science Foundation, under the grant CRSII2-147693 "Tracking in the Wild".

References

[1] Matthew D Hoffman, David M Blei, Chong Wang, and John Paisley. Stochastic variational inference. The Journal of Machine Learning Research, 14(1):1303–1347, 2013.
[2] Tim Salimans, David A Knowles, et al. Fixed-form variational posterior approximation through stochastic linear regression. Bayesian Analysis, 8(4):837–882, 2013.
[3] Rajesh Ranganath, Sean Gerrish, and David M Blei. Black box variational inference. arXiv preprint arXiv:1401.0118, 2013.
[4] Michalis Titsias and Miguel Lázaro-Gredilla. Doubly stochastic variational Bayes for non-conjugate inference. In International Conference on Machine Learning, 2014.
[5] Masa-Aki Sato. Online model selection based on the variational Bayes. Neural Computation, 13(7):1649–1681, 2001.
[6] A. Honkela, T. Raiko, M. Kuusela, M. Tornio, and J. Karhunen. Approximate Riemannian conjugate gradient learning for fixed-form variational Bayes. The Journal of Machine Learning Research, 11:3235–3268, 2011.
[7] Stéphane Chrétien and Alfred O. Hero III. Kullback proximal algorithms for maximum-likelihood estimation. IEEE Transactions on Information Theory, 46(5):1800–1810, 2000.
[8] Paul Tseng. An analysis of the EM algorithm and entropy-like proximal point methods. Mathematics of Operations Research, 29(1):27–44, 2004.
[9] M. Teboulle. Convergence of proximal-like algorithms. SIAM Journal on Optimization, 7(4):1069–1083, 1997.
[10] Pradeep Ravikumar, Alekh Agarwal, and Martin J Wainwright. Message-passing for graph-structured linear programs: Proximal projections, convergence and rounding schemes. In International Conference on Machine Learning, 2008.
[11] Behnam Babagholami-Mohamadabadi, Sejong Yoon, and Vladimir Pavlovic. D-MFVI: Distributed mean field variational inference using Bregman ADMM. arXiv preprint arXiv:1507.00824, 2015.
[12] Bo Dai, Niao He, Hanjun Dai, and Le Song.
Scalable Bayesian inference via particle mirror descent. Computing Research Repository, abs/1506.03101, 2015.
[13] Lucas Theis and Matthew D Hoffman. A trust-region method for stochastic variational inference with applications to streaming data. In International Conference on Machine Learning, 2015.
[14] Razvan Pascanu and Yoshua Bengio. Revisiting natural gradient for deep networks. arXiv preprint arXiv:1301.3584, 2013.
[15] Ulrich Paquet. On the convergence of stochastic variational inference in Bayesian networks. NIPS Workshop on Variational Inference, 2014.
[16] Nicholas G Polson, James G Scott, and Brandon T Willard. Proximal algorithms in statistics and machine learning. arXiv preprint arXiv:1502.03175, 2015.
[17] Harri Lappalainen and Antti Honkela. Bayesian non-linear independent component analysis by multilayer perceptrons. In Advances in Independent Component Analysis, pages 93–121. Springer, 2000.
[18] Chong Wang and David M. Blei. Variational inference in nonconjugate models. The Journal of Machine Learning Research, 14(1):1005–1031, April 2013.
[19] M. Seeger and H. Nickisch. Large scale Bayesian inference and experimental design for sparse linear models. SIAM Journal on Imaging Sciences, 4(1):166–199, 2011.
[20] Antti Honkela and Harri Valpola. Unsupervised variational Bayesian learning of nonlinear models. In Advances in Neural Information Processing Systems, pages 593–600, 2004.
[21] Mohammad Emtiyaz Khan, Reza Babanezhad, Wu Lin, Mark Schmidt, and Masashi Sugiyama. Convergence of proximal-gradient stochastic variational inference under non-decreasing step-size sequence. arXiv preprint arXiv:1511.00146, 2015.
[22] Carl Edward Rasmussen and Christopher K. I. Williams. Gaussian Processes for Machine Learning. MIT Press, 2006.
[23] B. Marlin, M. Khan, and K. Murphy. Piecewise bounds for estimating Bernoulli-logistic latent Gaussian models. In International Conference on Machine Learning, 2011.
[24] Mohammad Emtiyaz Khan. Decoupled variational inference.
In Advances in Neural Information Processing Systems, 2014.
[25] E. Challis and D. Barber. Concave Gaussian variational approximations for inference in large-scale Bayesian linear models. In International Conference on Artificial Intelligence and Statistics, 2011.
[26] Huahua Wang and Arindam Banerjee. Bregman alternating direction method of multipliers. In Advances in Neural Information Processing Systems, 2014.
A Convergent Gradient Descent Algorithm for Rank Minimization and Semidefinite Programming from Random Linear Measurements

Qinqing Zheng (University of Chicago, qinqing@cs.uchicago.edu)
John Lafferty (University of Chicago, lafferty@galton.uchicago.edu)

Abstract

We propose a simple, scalable, and fast gradient descent algorithm to optimize a nonconvex objective for the rank minimization problem and a closely related family of semidefinite programs. With O(r³κ²n log n) random measurements of a positive semidefinite n × n matrix of rank r and condition number κ, our method is guaranteed to converge linearly to the global optimum.

1 Introduction

Semidefinite programming has become a key optimization tool in many areas of applied mathematics, signal processing, and machine learning. SDPs often arise naturally from the problem structure, or are derived as surrogate optimizations that are relaxations of difficult combinatorial problems [7, 1, 8]. In spite of the importance of SDPs in principle, promising efficient algorithms with polynomial runtime guarantees, it is widely recognized that current optimization algorithms based on interior point methods can handle only relatively small problems. Thus, a considerable gap exists between the theory and applicability of SDP formulations. Scalable algorithms for semidefinite programming, and for closely related families of nonconvex programs more generally, are greatly needed.

A parallel development is the surprising effectiveness of simple classical procedures such as gradient descent for large scale problems, as explored in the recent machine learning literature. In many areas of machine learning and signal processing, such as classification, deep learning, and phase retrieval, gradient descent methods, in particular first-order stochastic optimization, have led to remarkably efficient algorithms that can attack very large scale problems [3, 2, 10, 6].
In this paper we build on this work to develop first-order algorithms for solving the rank minimization problem under random measurements, and a closely related family of semidefinite programs. Our algorithms are efficient and scalable, and we prove that they attain linear convergence to the global optimum under natural assumptions.

The affine rank minimization problem is to find a matrix X⋆ ∈ R^{n×p} of minimum rank satisfying the constraints A(X⋆) = b, where A : R^{n×p} → R^m is an affine transformation. The underdetermined case where m ≪ np is of particular interest, and can be formulated as the optimization

min_{X ∈ R^{n×p}} rank(X)   subject to  A(X) = b.   (1)

This problem is a direct generalization of compressed sensing, and subsumes many machine learning problems such as image compression, low-rank matrix completion, and low-dimensional metric embedding [18, 12]. While the problem is natural and has many applications, the optimization is nonconvex and challenging to solve. Without conditions on the transformation A or the minimum rank solution X⋆, it is generally NP-hard [15].

Existing methods, such as nuclear norm relaxation [18], singular value projection (SVP) [11], and alternating least squares (AltMinSense) [12], assume that a certain restricted isometry property (RIP) holds for A. In the random measurement setting, this essentially means that at least O(r(n + p) log(n + p)) measurements are available, where r = rank(X⋆) [18]. In this work, we assume that (i) X⋆ is positive semidefinite and (ii) A : R^{n×n} → R^m is defined as A(X)_i = tr(A_iX), where each A_i is a random n × n symmetric matrix from the Gaussian Orthogonal Ensemble (GOE), with (A_i)_{jj} ∼ N(0, 2) and (A_i)_{jk} ∼ N(0, 1) for j ≠ k. Our goal is thus to solve the optimization

min_{X ⪰ 0} rank(X)   subject to  tr(A_iX) = b_i,  i = 1, …, m.   (2)

In addition to the wide applicability of affine rank minimization, the problem is also closely connected to a class of semidefinite programs.
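The GOE measurement model above is easy to simulate. A small sketch (our own, with hypothetical function names) of sampling the A_i and applying the linear operator A(X)_i = tr(A_i X):

```python
import numpy as np

def sample_goe(n, m, rng):
    """Sample m matrices from the GOE: symmetric, with off-diagonal entries
    N(0,1) and diagonal entries N(0,2), as in the measurement model above."""
    G = rng.standard_normal((m, n, n))
    # (g_ij + g_ji)/sqrt(2) ~ N(0,1) off-diagonal; 2 g_ii / sqrt(2) ~ N(0,2) on-diagonal
    return (G + G.transpose(0, 2, 1)) / np.sqrt(2.0)

def measure(A, X):
    """The linear operator A(X)_i = tr(A_i X) for each measurement matrix A_i."""
    return np.einsum('mij,ji->m', A, X)
```

The operator is linear in X, which is what makes (2) an affine-constrained (but still nonconvex, due to the rank objective) problem.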
In Section 2, we show that the minimizer of a particular class of SDPs can be obtained by a linear transformation of X⋆. Thus, efficient algorithms for problem (2) can be applied in this setting as well. Noting that a rank-r solution X⋆ to (2) can be decomposed as X⋆ = Z⋆Z⋆^⊤ where Z⋆ ∈ R^{n×r}, our approach is based on minimizing the squared residual

f(Z) = (1/4m) ∥A(ZZ^⊤) − b∥² = (1/4m) ∑_{i=1}^m ( tr(Z^⊤ A_i Z) − b_i )².

While this is a nonconvex function, we take motivation from recent work on phase retrieval by Candès et al. [6], and develop a gradient descent algorithm for optimizing f(Z), using a carefully constructed initialization and step size. Our main contributions concerning this algorithm are as follows.

• We prove that with O(r³n log n) constraints our gradient descent scheme can exactly recover X⋆ with high probability. Empirical experiments show that this bound may potentially be improved to O(rn log n).
• We show that our method converges linearly, and has lower computational cost compared with previous methods.
• We carry out a detailed comparison of rank minimization algorithms, and demonstrate that when the measurement matrices A_i are sparse, our gradient method significantly outperforms alternative approaches.

In Section 3 we briefly review related work. In Section 4 we discuss the gradient scheme in detail. Our main analytical results are presented in Section 5, with detailed proofs contained in the supplementary material. Our experimental results are presented in Section 6, and we conclude with a brief discussion of future work in Section 7.

2 Semidefinite Programming and Rank Minimization

Before reviewing related work and presenting our algorithm, we pause to explain the connection between semidefinite programming and rank minimization. This connection enables our scalable gradient descent algorithm to be applied and analyzed for certain classes of SDPs. Consider a standard form semidefinite program

min_{X̃ ⪰ 0} tr(C̃ X̃)   subject to  tr(Ã_i X̃) = b_i,  i = 1, …, m,   (3)

where C̃, Ã_1, …, Ã_m ∈ S^n. If C̃ is positive definite, then we can write C̃ = LL^⊤ where L ∈ R^{n×n} is invertible. It follows that the minimum of problem (3) is the same as

min_{X ⪰ 0} tr(X)   subject to  tr(A_iX) = b_i,  i = 1, …, m,   (4)
In particular, minimizers e X∗of (3) are obtained from minimizers X∗of (4) via the transformation e X∗= L−1⊤X∗L−1. Since X is positive semidefinite, tr(X) is equal to ∥X∥∗. Hence, problem (4) is the nuclear norm relaxation of problem (2). Next, we characterize the specific cases where X∗= X⋆, so that the SDP and rank minimization solutions coincide. The following result is from Recht et al. [18]. Theorem 1. Let A : Rn×n −→Rm be a linear map. For every integer k with 1 ≤k ≤n, define the k-restricted isometry constant to be the smallest value δk such that (1 −δk) ∥X∥F ≤∥A(X)∥≤(1 + δk) ∥X∥F holds for any matrix X of rank at most k. Suppose that there exists a rank r matrix X⋆such that A(X⋆) = b. If δ2r < 1, then X⋆is the only matrix of rank at most r satisfying A(X) = b. Furthermore, if δ5r < 1/10, then X⋆can be attained by minimizing ∥X∥∗over the affine subset. In other words, since δ2r ≤δ5r, if δ5r < 1/10 holds for the transformation A and one finds a matrix X of rank r satisfying the affine constraint, then X must be positive semidefinite. Hence, one can ignore the semidefinite constraint X ⪰0 when solving the rank minimization (2). The resulting problem then can be exactly solved by nuclear norm relaxation. Since the minimum rank solution is positive semidefinite, it then coincides with the solution of the SDP (4), which is a constrained nuclear norm optimization. The observation that one can ignore the semidefinite constraint justifies our experimental comparison with methods such as nuclear norm relaxation, SVP, and AltMinSense, described in the following section. 3 Related Work Burer and Monteiro [4] proposed a general approach for solving semidefinite programs using factored, nonconvex optimization, giving mostly experimental support for the convergence of the algorithms. The first nontrivial guarantee for solving affine rank minimization problem is given by Recht et al. 
[18], based on replacing the rank function by the convex surrogate nuclear norm, as already mentioned in the previous section. While this is a convex problem, solving it in practice is nontrivial, and a variety of methods have been developed for efficient nuclear norm minimization. The most popular algorithms are proximal methods that perform singular value thresholding [5] at every iteration. While effective for small problem instances, the computational expense of the SVD prevents the method from being useful for large scale problems.

Recently, Jain et al. [11] proposed a projected gradient descent algorithm SVP (Singular Value Projection) that solves

min_{X ∈ R^{n×p}} ‖A(X) − b‖^2 subject to rank(X) ≤ r,

where ‖·‖ is the ℓ2 vector norm and r is the input rank. In the (t+1)th iteration, SVP updates X^{t+1} as the best rank-r approximation to the gradient update X^t − µ A^⊤(A(X^t) − b), which is constructed from the SVD. If rank(X⋆) = r, then SVP can recover X⋆ under a similar RIP condition as the nuclear norm heuristic, and enjoys a linear numerical rate of convergence. Yet SVP suffers from the expensive per-iteration SVD for large problem instances.

Subsequent work of Jain et al. [12] proposes an alternating least squares algorithm AltMinSense that avoids the per-iteration SVD. AltMinSense factorizes X into two factors U ∈ R^{n×r}, V ∈ R^{p×r} such that X = UV^⊤ and minimizes the squared residual ‖A(UV^⊤) − b‖_2^2 by updating U and V alternately. Each update is a least squares problem. The authors show that the iterates obtained by AltMinSense converge to X⋆ linearly under a RIP condition. However, the least squares problems are often ill-conditioned, and it is difficult to observe AltMinSense converging to X⋆ in practice.

As described above, considerable progress has been made on algorithms for rank minimization and certain semidefinite programming problems. Yet truly efficient, scalable and provably convergent algorithms have not yet been obtained. In the specific setting where X⋆ is positive semidefinite, our algorithm exploits this structure to achieve these goals. We note that recent and independent work of Tu et al. [21] proposes a hybrid algorithm called Procrustes Flow (PF), which uses a few iterations of SVP as initialization, and then applies gradient descent.

4 A Gradient Descent Algorithm for Rank Minimization

Our method is described in Algorithm 1. It parallels the Wirtinger Flow (WF) algorithm for phase retrieval [6], which recovers a complex vector x ∈ C^n given the squared magnitudes of its linear measurements b_i = |⟨a_i, x⟩|^2, i ∈ [m], where a_1, . . . , a_m ∈ C^n. Candès et al. [6] propose a first-order method to minimize the sum of squared residuals

f_WF(z) = Σ_{i=1}^{m} (|⟨a_i, z⟩|^2 − b_i)^2.    (5)

The authors establish the convergence of WF to the global optimum: given sufficient measurements, the iterates of WF converge linearly to x up to a global phase, with high probability. If z and the a_i are real-valued, the function f_WF(z) can be expressed as

f_WF(z) = Σ_{i=1}^{m} (z^⊤ a_i a_i^⊤ z − x^⊤ a_i a_i^⊤ x)^2,

which is a special case of f(Z) where A_i = a_i a_i^⊤ and each of Z and X⋆ is rank one. See Figure 1a for an illustration; Figure 1b shows the convergence rate of our method. Our methods and results are thus generalizations of Wirtinger flow for phase retrieval.
Before turning to the presentation of our technical results in the following section, we present some intuition and remarks about how and why this algorithm works. For simplicity, let us assume that the rank is specified correctly.

Initialization is of course crucial in nonconvex optimization, as many local minima may be present. To obtain a sufficiently accurate initialization, we use a spectral method, similar to those used in [17, 6]. The starting point is the observation that a linear combination of the constraint values and matrices yields an unbiased estimate of the solution.

Lemma 1. Let M = (1/m) Σ_{i=1}^{m} b_i A_i. Then (1/2) E[M] = X⋆, where the expectation is with respect to the randomness in the measurement matrices A_i.

Based on this fact, let X⋆ = U⋆ Σ U⋆^⊤ be the eigenvalue decomposition of X⋆, where U⋆ = [u⋆_1, . . . , u⋆_r] and Σ = diag(σ_1, . . . , σ_r) such that σ_1 ≥ · · · ≥ σ_r are the nonzero eigenvalues of X⋆. Let Z⋆ = U⋆ Σ^{1/2}. Clearly, u⋆_s = z⋆_s / ‖z⋆_s‖ is the top sth eigenvector of E[M], associated with eigenvalue 2‖z⋆_s‖^2. Therefore, we initialize according to z^0_s = √(|λ_s|/2) · v_s, where (v_s, λ_s) is the top sth eigenpair of M. For sufficiently large m, it is reasonable to expect that Z^0 is close to Z⋆; this is confirmed by concentration of measure arguments.

Certain key properties of f(Z) will be seen to yield a linear rate of convergence. In the analysis of convex functions, Nesterov [16] shows that for unconstrained optimization, the gradient descent scheme with sufficiently small step size will converge linearly to the optimum if the objective function is strongly convex and has a Lipschitz continuous gradient. However, these two properties are global and do not hold for our objective function f(Z). Nevertheless, we expect that similar conditions hold in the local area near Z⋆. If so, then if we start close enough to Z⋆, we can achieve the global optimum.
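The spectral initialization can be written directly from the eigenpairs of M. Below is our own minimal sketch (not the authors' code); as a sanity check, in the noiseless limit where M equals its expectation 2X⋆ from Lemma 1, the initializer recovers X⋆ exactly:

```python
import numpy as np

def spectral_init(M, r):
    # Top-r eigenpairs of M by |eigenvalue|; z_s = sqrt(|lambda_s| / 2) * v_s.
    lam, V = np.linalg.eigh(M)
    idx = np.argsort(-np.abs(lam))[:r]
    return V[:, idx] * np.sqrt(np.abs(lam[idx]) / 2)
```

If M = 2X⋆ with X⋆ positive semidefinite of rank r, the top eigenvalues of M are 2σ_s, so z_s = √σ_s · v_s and Z^0 Z^{0⊤} = Σ_s σ_s v_s v_s^⊤ = X⋆, as the construction intends.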
In our subsequent analysis, we establish the convergence of Algorithm 1 with a constant step size of the form µ/‖Z⋆‖_F^2, where µ is a small constant. Since ‖Z⋆‖_F is unknown, we replace it by ‖Z^0‖_F.

5 Convergence Analysis

In this section we present our main result analyzing the gradient descent algorithm, and give a sketch of the proof. To begin, note that the symmetric decomposition of X⋆ is not unique, since X⋆ = (Z⋆U)(Z⋆U)^⊤ for any r × r orthonormal matrix U. Thus, the solution set is

S = { Z̃ ∈ R^{n×r} | Z̃ = Z⋆U for some U with UU^⊤ = U^⊤U = I }.

Note that ‖Z̃‖_F^2 = ‖X⋆‖_* for any Z̃ ∈ S. We define the distance to the optimal solution in terms of this set.

Definition 1. Define the distance between Z and Z⋆ as

d(Z, Z⋆) = min_{UU^⊤=U^⊤U=I} ‖Z − Z⋆U‖_F = min_{Z̃∈S} ‖Z − Z̃‖_F.

Figure 1: (a) An instance of f(Z) where X⋆ ∈ R^{2×2} is rank-1 and Z ∈ R^2. The underlying truth is Z⋆ = [1, 1]^⊤. Both Z⋆ and −Z⋆ are minimizers. (b) Linear convergence of the gradient scheme, for n = 200, m = 1000 and r = 2. The distance metric is given in Definition 1.

Algorithm 1: Gradient descent for rank minimization
  input: {A_i, b_i}_{i=1}^m, r, µ
  initialization:
    Set (v_1, λ_1), . . . , (v_r, λ_r) to the top r eigenpairs of (1/m) Σ_{i=1}^m b_i A_i s.t. |λ_1| ≥ · · · ≥ |λ_r|
    Z^0 = [z^0_1, . . . , z^0_r] where z^0_s = √(|λ_s|/2) · v_s, s ∈ [r]
    k ← 0
  repeat
    ∇f(Z^k) = (1/m) Σ_{i=1}^m (tr(Z^{k⊤} A_i Z^k) − b_i) A_i Z^k
    Z^{k+1} = Z^k − (µ / (Σ_{s=1}^r |λ_s|/2)) ∇f(Z^k)
    k ← k + 1
  until convergence
  output: X̂ = Z^k Z^{k⊤}

Our main result for exact recovery is stated below, assuming that the rank is correctly specified. Since the true rank is typically unknown in practice, one can start from a very low rank and gradually increase it.

Theorem 2. Let the condition number κ = σ_1/σ_r denote the ratio of the largest to the smallest nonzero eigenvalues of X⋆. There exists a universal constant c_0 such that if m ≥ c_0 κ^2 r^3 n log n, with high probability the initialization Z^0 satisfies

d(Z^0, Z⋆) ≤ √(3σ_r/16).    (6)

Moreover, there exists a universal constant c_1 such that when using constant step size µ/‖Z⋆‖_F^2 with µ ≤ c_1/(κn) and initial value Z^0 obeying (6), the kth step of Algorithm 1 satisfies

d(Z^k, Z⋆) ≤ √(3σ_r/16) · (1 − µ/(12κr))^{k/2}

with high probability.

We now outline the proof, giving full details in the supplementary material. The proof has four main steps. The first step is to give a regularity condition under which the algorithm converges linearly if we start close enough to Z⋆. This provides a local regularity property that is similar to the Nesterov [16] criteria that the objective function is strongly convex and has a Lipschitz continuous gradient.

Definition 2. Let Z̄ = arg min_{Z̃∈S} ‖Z − Z̃‖_F denote the matrix closest to Z in the solution set. We say that f satisfies the regularity condition RC(ε, α, β) if there exist constants α, β such that for any Z satisfying d(Z, Z⋆) ≤ ε, we have

⟨∇f(Z), Z − Z̄⟩ ≥ (1/(α σ_r)) ‖Z − Z̄‖_F^2 + (1/(β ‖Z⋆‖_F^2)) ‖∇f(Z)‖_F^2.

Using this regularity condition, we show that the iterative step of the algorithm moves closer to the optimum, if the current iterate is sufficiently close.

Theorem 3. Consider the update Z^{k+1} = Z^k − (µ/‖Z⋆‖_F^2) ∇f(Z^k). If f satisfies RC(ε, α, β), d(Z^k, Z⋆) ≤ ε, and 0 < µ < min(α/2, 2/β), then

d(Z^{k+1}, Z⋆) ≤ √(1 − 2µ/(ακr)) · d(Z^k, Z⋆).

In the next step of the proof, we condition on two events that will be shown to hold with high probability using concentration results. Let δ denote a small value to be specified later.

A1 For any u ∈ R^n such that ‖u‖ ≤ √σ_1, ‖(1/m) Σ_{i=1}^m (u^⊤ A_i u) A_i − 2uu^⊤‖ ≤ δ/r.

A2 For any Z̃ ∈ S, ‖∂^2 f(Z̃)/∂z̃_s ∂z̃_k^⊤ − E[∂^2 f(Z̃)/∂z̃_s ∂z̃_k^⊤]‖ ≤ δ/r, for all s, k ∈ [r].

Here the expectations are with respect to the random measurement matrices. Under these assumptions, we can show that the objective satisfies the regularity condition with high probability.

Theorem 4. Suppose that A1 and A2 hold. If δ ≤ σ_r/16, then f satisfies the regularity condition RC(√(3σ_r/16), 24, 513κn) with probability at least 1 − mCe^{−ρn}, where C, ρ are universal constants.

Next we show that under A1, a good initialization can be found.

Theorem 5. Suppose that A1 holds. Let {v_s, λ_s}_{s=1}^r be the top r eigenpairs of M = (1/m) Σ_{i=1}^m b_i A_i such that |λ_1| ≥ · · · ≥ |λ_r|. Let Z^0 = [z_1, . . . , z_r] where z_s = √(|λ_s|/2) · v_s, s ∈ [r]. If δ ≤ σ_r/(4√r), then d(Z^0, Z⋆) ≤ √(3σ_r/16).

Finally, we show that conditioning on A1 and A2 is valid since these events have high probability as long as m is sufficiently large.

Theorem 6. If the number of samples m ≥ (42 / min(δ^2/(r^2 σ_1^2), δ/(r σ_1))) n log n, then for any u ∈ R^n satisfying ‖u‖ ≤ √σ_1,

‖(1/m) Σ_{i=1}^m (u^⊤ A_i u) A_i − 2uu^⊤‖ ≤ δ/r

holds with probability at least 1 − mCe^{−ρn} − 2/n^2, where C and ρ are universal constants.

Theorem 7. For any x ∈ R^n, if m ≥ (128 / min(δ^2/(4r^2 σ_1^2), δ/(2r σ_1))) n log n, then for any Z̃ ∈ S,

‖∂^2 f(Z̃)/∂z̃_s ∂z̃_k^⊤ − E[∂^2 f(Z̃)/∂z̃_s ∂z̃_k^⊤]‖ ≤ δ/r, for all s, k ∈ [r],

with probability at least 1 − 6me^{−n} − 4/n^2.

Note that since we need δ ≤ min(1/16, 1/(4√r)) σ_r, we have δ/(r σ_1) ≤ 1, and the number of measurements required by our algorithm scales as O(r^3 κ^2 n log n), while only O(r^2 κ^2 n log n) samples are required by the regularity condition. We conjecture this bound could be further improved to O(rn log n); this is supported by the experimental results presented below. Recently, Tu et al. [21] established a tighter O(r^2 κ^2 n) bound overall. Specifically, when only one SVP step is used in preprocessing, the initialization of PF is also the spectral decomposition of (1/2)M. The authors show that O(r^2 κ^2 n) measurements are sufficient for Z^0 to satisfy d(Z^0, Z⋆) ≤ O(√σ_r) with high probability, and demonstrate an O(rn) sample complexity for the regularity condition.

6 Experiments

In this section we report the results of experiments on synthetic datasets. We compare our gradient descent algorithm with nuclear norm relaxation, SVP and AltMinSense, for which we drop the positive semidefiniteness constraint, as justified by the observation in Section 2. We use ADMM for the nuclear norm minimization, based on the algorithm for the mixture approach in Tomioka et al. [19]; see Appendix G. For simplicity, we assume that AltMinSense, SVP and the gradient scheme know the true rank. Krylov subspace techniques such as the Lanczos method could be used to compute the partial eigendecomposition; we use the randomized algorithm of Halko et al. [9] to compute the low rank SVD. All methods are implemented in MATLAB and the experiments were run on a MacBook Pro with a 2.5GHz Intel Core i7 processor and 16 GB memory.

6.1 Computational Complexity

It is instructive to compare the per-iteration cost of the different approaches; see Table 1. Suppose that the density (fraction of nonzero entries) of each A_i is ρ. For AltMinSense, the cost of solving the least squares problem is O(mn^2 r^2 + n^3 r^3 + mn^2 rρ).
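To make the overall procedure concrete, here is a minimal end-to-end sketch of the gradient scheme of Algorithm 1 in Python/NumPy. This is our own illustration (the paper's experiments use MATLAB): spectral initialization from the top-r eigenpairs of M = (1/m) Σ b_i A_i, followed by gradient descent with the constant step size µ / (Σ_s |λ_s|/2), which approximates µ/‖Z⋆‖_F^2.

```python
import numpy as np

def rank_min_gd(As, b, r, mu=0.4, iters=3000):
    """Gradient descent for rank minimization (sketch of Algorithm 1).

    As: list of symmetric n x n measurement matrices; b: measurements.
    Returns the estimate X_hat = Z Z^T.
    """
    m = len(As)
    # Spectral initialization: top-r eigenpairs of M = (1/m) sum_i b_i A_i.
    M = sum(bi * A for A, bi in zip(As, b)) / m
    lam, V = np.linalg.eigh(M)
    idx = np.argsort(-np.abs(lam))[:r]
    Z = V[:, idx] * np.sqrt(np.abs(lam[idx]) / 2)
    # Step size mu / (sum_s |lambda_s| / 2), a proxy for mu / ||Z*||_F^2.
    step = mu / (np.abs(lam[idx]).sum() / 2)
    for _ in range(iters):
        G = np.zeros_like(Z)
        for A, bi in zip(As, b):
            G += (np.trace(Z.T @ A @ Z) - bi) * (A @ Z)
        Z -= step * G / m
    return Z @ Z.T
```

The fixed iteration count stands in for the "until convergence" test of the algorithm; in practice one would stop when the relative change in Z (or the residual) falls below a tolerance.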
The other three methods have O(mn^2 ρ) cost to compute the affine transformation. For the nuclear norm approach, the O(n^3) cost is from the SVD and the O(m^2) cost is due to the update of the dual variables. The gradient scheme requires 2n^2 r operations to compute Z^k Z^{k⊤} and to multiply Z^k by an n × n matrix to obtain the gradient. SVP needs O(n^2 r) operations to compute the top r singular vectors. However, in practice this partial SVD is more expensive than the 2n^2 r cost required for the matrix multiplies in the gradient scheme.

  Method                               | Complexity
  nuclear norm minimization via ADMM   | O(mn^2 ρ + m^2 + n^3)
  gradient descent                     | O(mn^2 ρ) + 2n^2 r
  SVP                                  | O(mn^2 ρ + n^2 r)
  AltMinSense                          | O(mn^2 r^2 + n^3 r^3 + mn^2 rρ)

Table 1: Per-iteration computational complexities of different methods.

Clearly, AltMinSense is the least efficient. For the other approaches, in the dense case (ρ large), the affine transformation dominates the computation. Our method removes the overhead caused by the SVD. In the sparse case (ρ small), the other parts dominate and our method enjoys a low cost.

6.2 Runtime Comparison

We conduct experiments for both dense and sparse measurement matrices. AltMinSense is indeed slow, so we do not include it here. In the first scenario, we randomly generate a 400×400 rank-2 matrix X⋆ = xx^⊤ + yy^⊤ where x, y ∼ N(0, I). We also generate m = 6n matrices A_1, . . . , A_m from the GOE, and then take b = A(X⋆). We report the relative error measured in the Frobenius norm, defined as ‖X̂ − X⋆‖_F / ‖X⋆‖_F. For the nuclear norm approach, we set the regularization parameter to λ = 10^{−5}. We test three values η = 10, 100, 200 for the penalty parameter and select η = 100 as it leads to the fastest convergence. Similarly, for SVP we evaluate the three values 5×10^{−5}, 10^{−4}, 2×10^{−4} for the step size, and select 10^{−4} as the largest for which SVP converges. For our approach, we test the three values 0.6, 0.8, 1.0 for µ and select 0.8 in the same way.
Figure 2: (a) Runtime comparison where X⋆ ∈ R^{400×400} is rank-2 and the A_i are dense. (b) Runtime comparison where X⋆ ∈ R^{600×600} is rank-2 and the A_i are sparse. (c) Sample complexity comparison.

In the second scenario, we use a more general and practical setting. We randomly generate a rank-2 matrix X⋆ ∈ R^{600×600} as before. We generate m = 7n sparse A_i whose entries are i.i.d. Bernoulli: (A_i)_{jk} = 1 with probability ρ, and 0 with probability 1 − ρ, where we use ρ = 0.001. For all the methods we use the same strategies as before to select parameters. For the nuclear norm approach, we try three values η = 10, 100, 200 and select η = 100. For SVP, we test the three values 5 × 10^{−3}, 2 × 10^{−3}, 10^{−3} for the step size and select 10^{−3}. For the gradient algorithm, we check the three values 0.8, 1, 1.5 for µ and choose 1. The results are shown in Figures 2a and 2b. In the dense case, our method is faster than the nuclear norm approach and slightly outperforms SVP. In the sparse case, it is significantly faster than the other approaches.

6.3 Sample Complexity

We also evaluate the number of measurements required by each method to exactly recover X⋆, which we refer to as the sample complexity. We randomly generate the true matrix X⋆ ∈ R^{n×n} and compute the solutions of each method given m measurements, where the A_i are randomly drawn from the GOE. A solution with relative error below 10^{−5} is considered to be successful. We run 40 trials and compute the empirical probability of successful recovery. We consider cases where n = 60 or 100 and X⋆ is of rank one or two.
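The synthetic setup used in these experiments, and the Procrustes distance d(Z, Z⋆) of Definition 1 used to track convergence, can be sketched as follows. This is our own illustrative code; the GOE scaling is chosen so that, consistent with Lemma 1, E[(1/m) Σ b_i A_i] = 2X⋆, and the optimal rotation in the distance is obtained from an SVD (orthogonal Procrustes):

```python
import numpy as np

def goe(n, rng):
    # Symmetric Gaussian matrix: off-diagonal variance 1, diagonal variance 2.
    G = rng.standard_normal((n, n))
    return (G + G.T) / np.sqrt(2)

def make_instance(n, m, rng):
    x, y = rng.standard_normal(n), rng.standard_normal(n)
    X_star = np.outer(x, x) + np.outer(y, y)          # rank-2 ground truth
    As = [goe(n, rng) for _ in range(m)]
    b = np.array([np.trace(A @ X_star) for A in As])  # b = A(X*)
    return X_star, As, b

def dist(Z, Z_star):
    # d(Z, Z*) = min_U ||Z - Z* U||_F over orthonormal U (orthogonal Procrustes):
    # the minimizer is U V^T from the SVD of Z*^T Z.
    U, _, Vt = np.linalg.svd(Z_star.T @ Z)
    return float(np.linalg.norm(Z - Z_star @ (U @ Vt)))
```

By construction, dist is invariant to right-multiplication of its first argument by an orthogonal matrix, which is exactly the ambiguity in the factorization X⋆ = ZZ^⊤.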
The results are shown in Figure 2c. For SVP and our approach, the phase transitions happen around m = 1.5n when X⋆ is rank-1 and m = 2.5n when X⋆ is rank-2. This scaling is close to the number of degrees of freedom in each case; this confirms that the sample complexity scales linearly with the rank r. The phase transition for the nuclear norm approach occurs later. The results suggest that the sample complexity of our method should also scale as O(rn log n), as for SVP and the nuclear norm approach [11, 18].

7 Conclusion

We connect a special case of affine rank minimization to a class of semidefinite programs with random constraints. Building on a recently proposed first-order algorithm for phase retrieval [6], we develop a gradient descent procedure for rank minimization and establish convergence to the optimal solution with O(r^3 n log n) measurements. We conjecture that O(rn log n) measurements are sufficient for the method to converge, and that the conditions on the sampling matrices A_i can be significantly weakened. More broadly, the technique used in this paper—factoring the semidefinite matrix variable, recasting the convex optimization as a nonconvex optimization, and applying first-order algorithms—first proposed by Burer and Monteiro [4], may be effective for a much wider class of SDPs, and deserves further study.

Acknowledgements

Research supported in part by NSF grant IIS-1116730 and ONR grant N00014-12-1-0762.

References

[1] Arash A. Amini and Martin J. Wainwright. High-dimensional analysis of semidefinite relaxations for sparse principal components. The Annals of Statistics, 37(5):2877–2921, 2009.
[2] Francis Bach. Adaptivity of averaged stochastic gradient descent to local strong convexity for logistic regression. The Journal of Machine Learning Research, 15(1):595–627, 2014.
[3] Francis Bach and Eric Moulines. Non-asymptotic analysis of stochastic approximation algorithms for machine learning.
In Advances in Neural Information Processing Systems (NIPS), 2011.
[4] Samuel Burer and Renato D. C. Monteiro. A nonlinear programming algorithm for solving semidefinite programs via low-rank factorization. Mathematical Programming, 95(2):329–357, 2003.
[5] Jian-Feng Cai, Emmanuel J. Candès, and Zuowei Shen. A singular value thresholding algorithm for matrix completion. SIAM Journal on Optimization, 20(4):1956–1982, 2010.
[6] Emmanuel Candès, Xiaodong Li, and Mahdi Soltanolkotabi. Phase retrieval via Wirtinger flow: Theory and algorithms. arXiv preprint arXiv:1407.1065, 2014.
[7] A. d'Aspremont, L. El Ghaoui, M. I. Jordan, and G. Lanckriet. A direct formulation for sparse PCA using semidefinite programming. In S. Thrun, L. Saul, and B. Schoelkopf (Eds.), Advances in Neural Information Processing Systems (NIPS), 2004.
[8] Michel X. Goemans and David P. Williamson. Improved approximation algorithms for maximum cut and satisfiability problems using semidefinite programming. Journal of the ACM, 42(6):1115–1145, November 1995. ISSN 0004-5411.
[9] Nathan Halko, Per-Gunnar Martinsson, and Joel A. Tropp. Finding structure with randomness: Probabilistic algorithms for constructing approximate matrix decompositions. SIAM Review, 53(2):217–288, 2011.
[10] Matt Hoffman, David M. Blei, Chong Wang, and John Paisley. Stochastic variational inference. The Journal of Machine Learning Research, 14, 2013.
[11] Prateek Jain, Raghu Meka, and Inderjit S. Dhillon. Guaranteed rank minimization via singular value projection. In Advances in Neural Information Processing Systems, pages 937–945, 2010.
[12] Prateek Jain, Praneeth Netrapalli, and Sujay Sanghavi. Low-rank matrix completion using alternating minimization. In Proceedings of the Forty-Fifth Annual ACM Symposium on Theory of Computing, pages 665–674. ACM, 2013.
[13] Beatrice Laurent and Pascal Massart. Adaptive estimation of a quadratic functional by model selection. Annals of Statistics, pages 1302–1338, 2000.
[14] Michel Ledoux and Brian Rider. Small deviations for beta ensembles. Electronic Journal of Probability, 15(41):1319–1343, 2010. ISSN 1083-6489. doi: 10.1214/EJP.v15-798. URL http://ejp.ejpecp.org/article/view/798.
[15] Raghu Meka, Prateek Jain, Constantine Caramanis, and Inderjit S. Dhillon. Rank minimization via online learning. In Proceedings of the 25th International Conference on Machine Learning, pages 656–663. ACM, 2008.
[16] Yurii Nesterov. Introductory Lectures on Convex Optimization, volume 87. Springer Science & Business Media, 2004.
[17] Praneeth Netrapalli, Prateek Jain, and Sujay Sanghavi. Phase retrieval using alternating minimization. In Advances in Neural Information Processing Systems, pages 2796–2804, 2013.
[18] Benjamin Recht, Maryam Fazel, and Pablo A. Parrilo. Guaranteed minimum-rank solutions of linear matrix equations via nuclear norm minimization. SIAM Review, 52(3):471–501, 2010.
[19] Ryota Tomioka, Kohei Hayashi, and Hisashi Kashima. Estimation of low-rank tensors via convex optimization. arXiv preprint arXiv:1010.0789, 2010.
[20] Joel A. Tropp. An introduction to matrix concentration inequalities. arXiv preprint arXiv:1501.01571, 2015.
[21] Stephen Tu, Ross Boczar, Mahdi Soltanolkotabi, and Benjamin Recht. Low-rank solutions of linear matrix equations via Procrustes flow. arXiv preprint arXiv:1507.03566, 2015.
On-the-Job Learning with Bayesian Decision Theory

Keenon Werling, Department of Computer Science, Stanford University, keenon@cs.stanford.edu
Arun Chaganty, Department of Computer Science, Stanford University, chaganty@cs.stanford.edu
Percy Liang, Department of Computer Science, Stanford University, pliang@cs.stanford.edu
Christopher D. Manning, Department of Computer Science, Stanford University, manning@cs.stanford.edu

Abstract

Our goal is to deploy a high-accuracy system starting with zero training examples. We consider an on-the-job setting, where as inputs arrive, we use real-time crowdsourcing to resolve uncertainty where needed and output our prediction when confident. As the model improves over time, the reliance on crowdsourcing queries decreases. We cast our setting as a stochastic game based on Bayesian decision theory, which allows us to balance latency, cost, and accuracy objectives in a principled way. Computing the optimal policy is intractable, so we develop an approximation based on Monte Carlo Tree Search. We tested our approach on three datasets—named-entity recognition, sentiment classification, and image classification. On the NER task we obtained more than an order of magnitude reduction in cost compared to full human annotation, while boosting performance relative to the expert provided labels. We also achieve an 8% F1 improvement over having a single human label the whole set, and a 28% F1 improvement over online learning.

"Poor is the pupil who does not surpass his master." – Leonardo da Vinci

1 Introduction

There are two roads to an accurate AI system today: (i) gather a huge amount of labeled training data [1] and do supervised learning [2]; or (ii) use crowdsourcing to directly perform the task [3, 4]. However, both solutions require non-trivial amounts of time and money.
In many situations, one wishes to build a new system — e.g., to do Twitter information extraction [5] to aid in disaster relief efforts or monitor public opinion — but one simply lacks the resources to follow either the pure ML or pure crowdsourcing road. In this paper, we propose a framework called on-the-job learning (formalizing and extending ideas first implemented in [6]), in which we produce high quality results from the start without requiring a trained model. When a new input arrives, the system can choose to asynchronously query the crowd on parts of the input it is uncertain about (e.g. query about the label of a single token in a sentence). After collecting enough evidence the system makes a prediction. The goal is to maintain high accuracy by initially using the crowd as a crutch, but gradually becoming more self-sufficient as the model improves. Online learning [7] and online active learning [8, 9, 10] are different in that they do not actively seek new information prior to making a prediction, and cannot maintain high accuracy independent of the number of data instances seen so far. Active classification [11], like us, strategically seeks information (by querying a subset of labels) prior to prediction, but it is based on a static policy, whereas we improve the model during test time based on observed data. To determine which queries to make, we model on-the-job learning as a stochastic game based on a CRF prediction model.

Figure 1: Named entity recognition on tweets in on-the-job learning.
We use Bayesian decision theory to trade off latency, cost, and accuracy in a principled manner. Our framework naturally gives rise to intuitive strategies: To achieve high accuracy, we should ask for redundant labels to offset the noisy responses. To achieve low latency, we should issue queries in parallel, whereas if latency is unimportant, we should issue queries sequentially in order to be more adaptive. Computing the optimal policy is intractable, so we develop an approximation based on Monte Carlo tree search [12] and progressive widening to reason about continuous time [13]. We implemented and evaluated our system on three different tasks: named-entity recognition, sentiment classification, and image classification. On the NER task we obtained more than an order of magnitude reduction in cost compared to full human annotation, while boosting performance relative to the expert provided labels. We also achieve an 8% F1 improvement over having a single human label the whole set, and a 28% F1 improvement over online learning. An open-source implementation of our system, dubbed LENSE for "Learning from Expensive Noisy Slow Experts", is available at http://www.github.com/keenon/lense.

2 Problem formulation

Consider a structured prediction problem from input x = (x_1, . . . , x_n) to output y = (y_1, . . . , y_n). For example, for named-entity recognition (NER) on tweets, x is a sequence of words in the tweet (e.g., "on George str.") and y is the corresponding sequence of labels (e.g., NONE LOCATION LOCATION). The full set of labels is PERSON, LOCATION, RESOURCE, and NONE. In the on-the-job learning setting, inputs arrive in a stream. On each input x, we make zero or more queries q_1, q_2, . . . to the crowd to obtain labels (potentially more than once) for any positions in x. The responses r_1, r_2, . . . come back asynchronously, which are incorporated into our current prediction model p_θ.
Figure 2 (left) shows one possible outcome: We query positions q_1 = 2 ("George") and q_2 = 3 ("str."). The first query returns r_1 = LOCATION, upon which we make another query on the same position, q_3 = 3, and so on. When we have sufficient confidence about the entire output, we return the most likely prediction ŷ under the model. Each query q_i is issued at time s_i and the response comes back at time t_i. Assume that each query costs m cents. Our goal is to choose queries to maximize accuracy and minimize latency and cost. We make several remarks about this setting: First, we must make a prediction ŷ on each input x in the stream, unlike in active learning, where we are only interested in the pool or stream of examples for the purposes of building a good model. Second, the responses are used to update the prediction model, like in online learning. This allows the number of queries needed (and thus cost and latency) to decrease over time without compromising accuracy.

Figure 2: Example behavior while running structure prediction on the tweet "Soup on George str." We omit the RESOURCE label from the game tree for visual clarity. (a) Incorporating information from responses. The bar graphs represent the marginals over the labels for each token (indicated by its first character) at different points in time. The two timelines show how the system updates its confidence over labels based on the crowd's responses. The system continues to issue queries until it has sufficient confidence in its labels. See the paragraph on behavior in Section 3 for more information. (b) Game tree. An example of a partial game tree constructed by the system when deciding which action to take in the state σ = (1, (3), (0), (∅), (∅)), i.e. the query q_1 = 3 has already been issued and the system must decide whether to issue another query or wait for a response to q_1.

3 Model

We model on-the-job learning as a stochastic game with two players: the system and the crowd. The game starts with the system receiving input x and ends when the system turns in a set of labels y = (y_1, . . . , y_n). During the system's turn, the system may choose a query action q ∈ {1, . . . , n} to ask the crowd to label y_q. The system may also choose the wait action (q = ∅_W) to wait for the crowd to respond to a pending query, or the return action (q = ∅_R) to terminate the game and return its prediction given the responses received thus far. The system can make as many queries in a row (i.e. simultaneously) as it wants before deciding to wait or turn in.¹ When the wait action is chosen, the turn switches to the crowd, which provides a response r to one pending query and advances the game clock by the time taken for the crowd to respond. The turn then immediately reverts back to the system. When the game ends (the system chooses the return action), the system evaluates a utility that depends on the accuracy of its prediction, the number of queries issued and the total time taken. The system should choose query and wait actions to maximize the utility of the prediction eventually returned. In the rest of this section, we describe the details of the game tree and our choice of utility, specify models for crowd responses, and close with a brief exploration of behavior admitted by our model.

¹ This rules out the possibility of launching a query midway through waiting for the next response. However, we feel this is a reasonable limitation that significantly simplifies the search space.

Game tree. Let us now formalize the game tree in terms of its states, actions, transitions and rewards; see Figure 2b for an example. The game state σ = (t_now, q, s, r, t) consists of the current time t_now, the actions q = (q_1, . . . , q_{k−1}) that have been issued at times s = (s_1, . . . , s_{k−1}), and the responses r = (r_1, . . . , r_{k−1}) that have been received at times t = (t_1, . . . , t_{k−1}). Let r_j = ∅ and t_j = ∅ iff q_j is not a query action or its response has not been received by time t_now. During the system's turn, when the system chooses an action q_k, the state is updated to σ′ = (t_now, q′, s′, r′, t′), where q′ = (q_1, . . . , q_k), s′ = (s_1, . . . , s_{k−1}, t_now), r′ = (r_1, . . . , r_{k−1}, ∅) and t′ = (t_1, . . . , t_{k−1}, ∅). If q_k ∈ {1, . . . , n}, then the system chooses another action from the new state σ′. If q_k = ∅_W, the crowd makes a stochastic move from σ′. Finally, if q_k = ∅_R, the game ends, and the system returns its best estimate of the labels using the responses it has received, obtaining a utility U(σ) (defined later).

Let F = {1 ≤ j ≤ k − 1 | q_j ≠ ∅_W ∧ r_j = ∅} be the set of in-flight requests. During the crowd's turn (i.e. after the system chooses ∅_W), the next response from the crowd, j* ∈ F, is chosen as j* = arg min_{j∈F} t′_j, where t′_j is sampled from the response-time model, t′_j ∼ p_T(t′_j | s_j, t′_j > t_now), for each j ∈ F. Finally, a response is sampled using a response model, r′_{j*} ∼ p(r′_{j*} | x, r), and the state is updated to σ′ = (t_{j*}, q, s, r′, t′), where r′ = (r_1, . . . , r′_{j*}, . . . , r_k) and t′ = (t_1, . . . , t′_{j*}, . . . , t_k).

Utility. Under Bayesian decision theory, the optimal choice for an action in state σ = (t_now, q, r, s, t) is the one that attains the maximum expected utility (i.e. value) for the game starting at σ. Recall that the system can return at any time, at which point it receives a utility that trades off two things: The first is the accuracy of the MAP estimate according to the model's best guess of y, incorporating all responses received by time τ.
The second is the cost of making queries: a (monetary) cost wM per query made and a penalty of wT per unit of time taken. Formally, we define the utility to be:

U(σ) ≜ ExpAcc(p(y | x, q, s, r, t)) − (nQ wM + tnow wT),   (1)
ExpAcc(p) = E_{p(y)}[Accuracy(arg max_{y′} p(y′))],   (2)

where nQ = |{j | qj ∈ {1, . . . , n}}| is the number of queries made, and p(y | x, q, s, r, t) is a prediction model that incorporates the crowd’s responses. The utility of wait and return actions is computed by taking expectations over subsequent trajectories in the game tree. This is intractable to compute exactly, so we propose an approximate algorithm in Section 4.

Environment model. The final component is a model of the environment (crowd). Given input x and queries q = (q1, . . . , qk) issued at times s = (s1, . . . , sk), we define a distribution over the output y, responses r = (r1, . . . , rk) and response times t = (t1, . . . , tk) as follows:

p(y, r, t | x, q, s) ≜ pθ(y | x) ∏_{i=1}^{k} pR(ri | y_{qi}) pT(ti | si).   (3)

The three components are as follows: pθ(y | x) is the prediction model (e.g. a standard linear-chain CRF); pR(r | yq) is the response model, which describes the distribution of the crowd’s response r for a given query q when the true answer is yq; and pT(ti | si) specifies the latency of query qi. The CRF model pθ(y | x) is learned based on all actual responses (not simulated ones) using AdaGrad. To model annotation errors, we set pR(r | yq) = 0.7 iff r = yq,2 and distribute the remaining probability for r uniformly. Given this full model, we can compute p(r′ | x, r, q) simply by marginalizing out y and t from Equation 3. When conditioning on r, we ignore responses that have not yet been received (i.e. when rj = ∅ for some j).

Behavior. Let’s look at typical behavior that we expect the model and utility to capture. Figure 2a shows how the marginals over the labels change as the crowd provides responses for our running example, i.e.
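As a concrete illustration of Equations (1)–(2): when pθ(y | x) factorizes over tokens and accuracy is the fraction of correctly labeled tokens, ExpAcc reduces to the mean max-marginal. The weight values and this factorized simplification are our own illustrative assumptions, not the paper’s settings:

```python
import numpy as np


def expected_accuracy(marginals):
    """ExpAcc (Eq. 2) under a token-factorized p(y): the MAP label at each
    position is its max-marginal, so expected per-token accuracy is the mean
    of the max marginal over positions."""
    return float(np.mean([max(p) for p in marginals]))


def utility(marginals, n_queries, t_now, w_M=0.01, w_T=0.001):
    """U(sigma) = ExpAcc - (n_Q * w_M + t_now * w_T), as in Eq. (1);
    w_M and w_T here are placeholder weights."""
    return expected_accuracy(marginals) - (n_queries * w_M + t_now * w_T)
```

For two tokens with max-marginals 0.9 and 0.6, two queries, and ten time units, this gives 0.75 − 0.02 − 0.01 = 0.72.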
named entity recognition for the sentence “Soup on George str.”. In both timelines, the system issues queries on “Soup” and “George” because it is not confident about its predictions for these tokens. In the first timeline, the crowd correctly responds that “Soup” is a resource and that “George” is a location. Integrating these responses, the system is also more confident about its prediction on “str.”, and turns in the correct sequence of labels. In the second timeline, a crowd worker makes an error and labels “George” to be a person. The system still has uncertainty on “George” and issues an additional query, which receives a correct response, following which the system turns in the correct sequence of labels. While the answer is still correct, the system could have taken less time to respond by making an additional query on “George” at the very beginning.

2 We found the humans we hired were roughly 70% accurate in our experiments.

4 Game playing

In Section 3 we modeled on-the-job learning as a stochastic game played between the system and the crowd. We now turn to the problem of actually finding a policy that maximizes the expected utility, which is, of course, intractable because of the large state space. Our algorithm (Algorithm 1) combines ideas from Monte Carlo tree search [12] to systematically explore the state space and progressive widening [13] to deal with the challenge of continuous variables (time). Some intuition about the algorithm is provided below. When simulating the system’s turn, the next state (and hence action) is chosen using the upper confidence tree (UCT) decision rule, which trades off maximizing the value of the next state (exploitation) with the number of visits (exploration). The crowd’s turn is simulated based on the transitions defined in Section 3. To handle the unbounded fanout during the crowd’s turn, we use progressive widening, which maintains a current set of “active” or “explored” states that is gradually grown with time.
Let N(σ) be the number of times a state has been visited, and C(σ) be all successor states that the algorithm has sampled.

Algorithm 1 Approximating expected utility with MCTS and progressive widening
1: For all σ: N(σ) ← 0, V(σ) ← 0, C(σ) ← [ ]  ▷ Initialize visits, utility sum, and children
2: function MONTECARLOVALUE(state σ)
3:   increment N(σ)
4:   if system’s turn then
5:     σ′ ← arg max_{σ′} { V(σ′)/N(σ′) + c √(log N(σ) / N(σ′)) }  ▷ Choose next state σ′ using UCT
6:     v ← MONTECARLOVALUE(σ′)
7:     V(σ) ← V(σ) + v  ▷ Record observed utility
8:     return v
9:   else if crowd’s turn then
10:     if max(1, √N(σ)) ≤ |C(σ)| then  ▷ Restrict continuous samples using PW
11:       σ′ is sampled from the set of already visited C(σ) based on (3)
12:     else
13:       σ′ is drawn based on (3)
14:       C(σ) ← C(σ) ∪ {σ′}
15:     end if
16:     return MONTECARLOVALUE(σ′)
17:   else if game terminated then
18:     return utility U of σ according to (1)
19:   end if
20: end function

5 Experiments

In this section, we empirically evaluate our approach on three tasks. While the on-the-job setting we propose is targeted at scenarios where there is no data to begin with, we use existing labeled datasets (Table 1) to have a gold standard.

Baselines. We evaluated the following four methods on each dataset:
1. Human n-query: The majority vote of n human crowd workers was used as a prediction.
2. Online learning: Uses a classifier that trains on the gold output for all examples seen so far and then returns the MLE as a prediction. This is the best possible offline system: it sees perfect information about all the data seen so far, but cannot query the crowd while making a prediction.
3. Threshold baseline: Uses the following heuristic: for each label yi, we ask for m queries such that (1 − pθ(yi | x)) × 0.3^m ≥ 0.98. Instead of computing the expected marginals over the responses to queries in flight, we simply count the in-flight requests for a given variable and reduce the uncertainty on that variable by a factor of 0.3.
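The recursion of Algorithm 1 can be sketched as follows. The callback interface (`turn`, `successors`, `sample_crowd`, `terminal_utility`) is our own, and we back each observed value up onto the chosen child rather than only the current state, a standard MCTS backup variant of line 7; the paper’s actual game states and transition models are not reproduced here:

```python
import math
import random


def monte_carlo_value(sigma, N, V, C, turn, successors, sample_crowd,
                      terminal_utility, c=1.0):
    """One simulation of Algorithm 1. N, V, C are dicts keyed by state
    (visit counts, utility sums, sampled children); the callbacks are
    problem-specific stand-ins."""
    N[sigma] = N.get(sigma, 0) + 1
    kind = turn(sigma)
    if kind == "system":
        def uct(sp):  # exploit high average value, explore rarely visited states
            if N.get(sp, 0) == 0:
                return float("inf")
            return V.get(sp, 0.0) / N[sp] + c * math.sqrt(math.log(N[sigma]) / N[sp])
        sp = max(successors(sigma), key=uct)
        v = monte_carlo_value(sp, N, V, C, turn, successors, sample_crowd,
                              terminal_utility, c)
        V[sp] = V.get(sp, 0.0) + v  # back up the observed utility
        return v
    elif kind == "crowd":
        # Progressive widening: only draw a fresh sample while sqrt(N) > |C|,
        # otherwise revisit an already-explored continuous successor.
        children = C.setdefault(sigma, [])
        if max(1, math.isqrt(N[sigma])) <= len(children):
            sp = random.choice(children)
        else:
            sp = sample_crowd(sigma)
            children.append(sp)
        return monte_carlo_value(sp, N, V, C, turn, successors, sample_crowd,
                                 terminal_utility, c)
    else:  # terminal: the system chose the return action
        return terminal_utility(sigma)
```

On a toy one-step game, repeated simulations concentrate visits on the higher-utility action, which is the intended UCT behavior.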
The system continues launching requests until the threshold (adjusted by the number of queries in flight) is crossed.

NER (657): We evaluate on the CoNLL-2003 NER task3, a sequence labeling problem over English sentences. We only consider the four tags corresponding to persons, locations, organizations or none4. We used standard features [14]: the current word, current lemma, previous and next lemmas, lemmas in a window of size three to the left and right, word shape and word prefix and suffixes, as well as word embeddings.

Sentiment (1800): We evaluate on a subset of the IMDB sentiment dataset [15] that consists of 2000 polar movie reviews; the goal is binary classification of documents into classes POS and NEG. We used two feature sets, the first (UNIGRAMS) containing only word unigrams, and the second (RNN) that also contains sentence vector embeddings from [16].

Face (1784): We evaluate on a celebrity face classification task [17]. Each image must be labeled as one of the following four choices: Anderson Cooper, Daniel Craig, Scarlett Johansson or Miley Cyrus. We used the last layer of an 11-layer AlexNet [2] trained on ImageNet as input feature embeddings, though we leave back-propagating into the net to future work.

Table 1: Datasets used in this paper and the number of examples we evaluate on.

                   Named Entity Recognition                        Face Identification
System      Delay/tok  Qs/tok  PER F1  LOC F1  ORG F1  F1        Latency  Qs/ex  Acc.
1-vote      467 ms     1.0     90.2    78.8    71.5    80.2      1216 ms  1.0    93.6
3-vote      750 ms     3.0     93.6    85.1    74.5    85.4      1782 ms  3.0    99.1
5-vote      1350 ms    5.0     95.5    87.7    78.7    87.3      2103 ms  5.0    99.8
Online      n/a        n/a     56.9    74.6    51.4    60.9      n/a      n/a    79.9
Threshold   414 ms     0.61    95.2    89.8    79.8    88.3      1680 ms  2.66   93.5
LENSE       267 ms     0.45    95.2    89.7    81.7    88.8      1590 ms  2.37   99.2

Table 2: Results on the NER and Face tasks comparing latencies, queries per token (Qs/tok) and performance metrics (F1 for NER and accuracy for Face).
Predictions are made using MLE on the model given responses. The baseline does not reason about time and makes all its queries at the very beginning.

4. LENSE: Our full system as described in Section 3.

Implementation and crowdsourcing setup. We implemented the retainer model of [18] on Amazon Mechanical Turk to create a “pool” of crowd workers that could respond to queries in real time. The workers were given a short tutorial on each task before joining the pool to minimize systematic errors caused by misunderstanding the task. We paid workers $1.00 to join the retainer pool and an additional $0.01 per query (for NER, since response times were much faster, we paid $0.005 per query). Worker response times were generally in the range of 0.5–2 seconds for NER, 10–15 seconds for Sentiment, and 1–4 seconds for Faces. When running experiments, we found that the results varied based on the current worker quality. To control for variance in worker quality across our evaluations of the different methods, we collected 5 worker responses and their delays on each label ahead of time5. During simulation we sample the worker responses and delays without replacement from this frozen pool of worker responses.

Summary of results. Table 2 and Table 3 summarize the performance of the methods on the three tasks. On all three datasets, we found that on-the-job learning outperforms machine and human-only

3 http://www.cnts.ua.ac.be/conll2003/ner/
4 The original also includes a fifth tag for miscellaneous; however, the definition for miscellaneous is complex, making it very difficult for non-expert crowd workers to provide accurate labels.
5 These datasets are available in the code repository for this paper.

Figure 3: Queries per example for LENSE on Sentiment.
With simple UNIGRAM features, the model quickly learns it does not have the capacity to answer confidently and must query the crowd. With more complex RNN features, the model learns to be more confident and queries the crowd less over time.

System               Latency  Qs/ex  Acc.
1-vote               6.6 s    1.00   89.2
3-vote               10.9 s   3.00   95.8
5-vote               13.5 s   5.00   98.7
UNIGRAMS  Online     n/a      n/a    78.1
UNIGRAMS  Threshold  10.9 s   2.99   95.9
UNIGRAMS  LENSE      11.7 s   3.48   98.6
RNN       Online     n/a      n/a    85.0
RNN       Threshold  11.0 s   2.85   96.0
RNN       LENSE      11.0 s   3.19   98.6

Table 3: Results on the Sentiment task comparing latency, queries per example and accuracy.

Figure 4: Comparing F1 and queries per token on the NER task over time. The left graph compares LENSE to online learning (which cannot query humans at test time). This highlights that LENSE maintains high F1 scores even with very small training set sizes, by falling back on the crowd when it is unsure. The right graph compares the query rate over time to 1-vote. This clearly shows that as the model learns, it needs to query the crowd less.

comparisons on both quality and cost. On NER, we achieve an F1 of 88.4% at more than an order of magnitude reduction in the cost of achieving a comparable-quality result using the 5-vote approach. On Sentiment and Faces, we reduce costs for a comparable accuracy by a factor of around 2. For the latter two tasks, both on-the-job learning methods perform less well than in NER. We suspect this is due to the presence of a dominant class (“none”) in NER that the model can very quickly learn to expend almost no effort on. LENSE outperforms the threshold baseline, supporting the importance of Bayesian decision theory. Figure 4 tracks the performance and cost of LENSE over time on the NER task.
LENSE not only consistently outperforms the other baselines, but the cost of the system also steadily decreases over time. On the NER task, we find that LENSE is able to trade off time to produce more accurate results than the 1-vote baseline with fewer queries, by waiting for responses before making another query. While on-the-job learning allows us to deploy quickly and ensure good results, we would like to eventually operate without crowd supervision. In Figure 3, we show the number of queries per example on Sentiment with two different feature sets, UNIGRAMS and RNN (as described in Table 1). With simpler features (UNIGRAMS), the model saturates early and we will continue to need to query the crowd to achieve our accuracy target (as specified by the loss function). On the other hand, using richer features (RNN), the model is able to learn from the crowd and the amount of supervision needed reduces over time. Note that even when the model capacity is limited, LENSE is able to guarantee a consistent, high level of performance.

Reproducibility. All code, data, and experiments for this paper are available on CodaLab at https://www.codalab.org/worksheets/0x2ae89944846444539c2d08a0b7ff3f6f/.

6 Related Work

On-the-job learning draws ideas from many areas: online learning, active learning, active classification, crowdsourcing, and structured prediction.

Online learning. The fundamental premise of online learning is that algorithms should improve with time, and there is a rich body of work in this area [7]. In our setting, algorithms not only improve over time, but also maintain high accuracy from the beginning, whereas regret bounds only achieve this asymptotically.

Active learning. Active learning (see [19] for a survey) algorithms strategically select the most informative examples to build a classifier. Online active learning [8, 9, 10] performs active learning in the online setting. Several authors have also considered using crowd workers as a noisy oracle, e.g. [20, 21].
This setting differs from our setup in that it assumes labels can only be observed after classification, which makes it nearly impossible to maintain high accuracy from the beginning.

Active classification. Active classification [22, 23, 24] asks which features are most informative to measure at test time. Existing active classification algorithms rely on having a fully labeled dataset, which is used to learn a static policy for when certain features should be queried; this policy does not change at test time. On-the-job learning differs from active classification in two respects: true labels are never observed, and our system improves itself at test time by learning a stronger model. A notable exception is Legion:AR [6], which, like us, operates in the on-the-job learning setting for real-time activity classification. However, they do not explore the machine learning foundations associated with operating in this setting, which is the aim of this paper.

Crowdsourcing. A burgeoning subset of the crowdsourcing community overlaps with machine learning. One example is Flock [25], which first crowdsources the identification of features for an image classification task, and then asks the crowd to annotate these features so it can learn a decision tree. In another line of work, TurKontrol [26] models individual crowd worker reliability to optimize the number of human votes needed to achieve confident consensus using a POMDP.

Structured prediction. An important aspect of our prediction tasks is that the output is structured, which leads to a much richer setting for on-the-job learning. Since tags are correlated, the importance of a coherent framework for optimizing querying resources is increased. Making active partial observations on structures has been explored in the measurements framework of [27] and in the distant supervision setting [28].
7 Conclusion

We have introduced a new framework that learns from (noisy) crowds on-the-job to maintain high accuracy while reducing cost significantly over time. The technical core of our approach is modeling the on-the-job setting as a stochastic game and using ideas from game playing to approximate the optimal policy. We have built a system, LENSE, which obtains significant cost reductions over a pure crowd approach and significant accuracy improvements over a pure ML approach.

Acknowledgments

We are grateful to Kelvin Guu and Volodymyr Kuleshov for useful feedback regarding the calibration of our models and to Amy Bearman for providing the image embeddings for the face classification experiments. We would also like to thank our anonymous reviewers for their helpful feedback. Finally, our work was sponsored by a Sloan Fellowship to the third author.

References

[1] J. Deng, W. Dong, R. Socher, L. Li, K. Li, and L. Fei-Fei. ImageNet: A large-scale hierarchical image database. In Computer Vision and Pattern Recognition (CVPR), pages 248–255, 2009.
[2] A. Krizhevsky, I. Sutskever, and G. E. Hinton. ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems (NIPS), pages 1097–1105, 2012.
[3] M. S. Bernstein, G. Little, R. C. Miller, B. Hartmann, M. S. Ackerman, D. R. Karger, D. Crowell, and K. Panovich. Soylent: A word processor with a crowd inside. In Symposium on User Interface Software and Technology, pages 313–322, 2010.
[4] N. Kokkalis, T. Köhn, C. Pfeiffer, D. Chornyi, M. S. Bernstein, and S. R. Klemmer. EmailValet: Managing email overload through private, accountable crowdsourcing. In Conference on Computer Supported Cooperative Work, pages 1291–1300, 2013.
[5] C. Li, J. Weng, Q. He, Y. Yao, A. Datta, A. Sun, and B. Lee. TwiNER: Named entity recognition in targeted Twitter stream. In ACM Special Interest Group on Information Retrieval (SIGIR), pages 721–730, 2012.
[6] W. S. Lasecki, Y. C. Song, H. Kautz, and J. P. Bigham. Real-time crowd labeling for deployable activity recognition. In Conference on Computer Supported Cooperative Work, 2013.
[7] N. Cesa-Bianchi and G. Lugosi. Prediction, Learning, and Games. Cambridge University Press, 2006.
[8] D. Helmbold and S. Panizza. Some label efficient learning results. In Conference on Learning Theory (COLT), pages 218–230, 1997.
[9] D. Sculley. Online active learning methods for fast label-efficient spam filtering. In Conference on Email and Anti-spam (CEAS), 2007.
[10] W. Chu, M. Zinkevich, L. Li, A. Thomas, and B. Tseng. Unbiased online active learning in data streams. In International Conference on Knowledge Discovery and Data Mining (KDD), pages 195–203, 2011.
[11] T. Gao and D. Koller. Active classification based on value of classifier. In Advances in Neural Information Processing Systems (NIPS), pages 1062–1070, 2011.
[12] L. Kocsis and C. Szepesvári. Bandit based Monte-Carlo planning. In European Conference on Machine Learning (ECML), pages 282–293, 2006.
[13] R. Coulom. Computing Elo ratings of move patterns in the game of Go. Computer Games Workshop, 2007.
[14] J. R. Finkel, T. Grenager, and C. Manning. Incorporating non-local information into information extraction systems by Gibbs sampling. In Association for Computational Linguistics (ACL), pages 363–370, 2005.
[15] A. L. Maas, R. E. Daly, P. T. Pham, D. Huang, A. Y. Ng, and C. Potts. Learning word vectors for sentiment analysis. In ACL: HLT, pages 142–150, Portland, Oregon, USA, June 2011. Association for Computational Linguistics.
[16] R. Socher, A. Perelygin, J. Y. Wu, J. Chuang, C. D. Manning, A. Y. Ng, and C. Potts. Recursive deep models for semantic compositionality over a sentiment treebank. In Empirical Methods in Natural Language Processing (EMNLP), 2013.
[17] N. Kumar, A. C. Berg, P. N. Belhumeur, and S. K. Nayar.
Attribute and Simile Classifiers for Face Verification. In ICCV, Oct 2009.
[18] M. S. Bernstein, J. Brandt, R. C. Miller, and D. R. Karger. Crowds in two seconds: Enabling realtime crowd-powered interfaces. In User Interface Software and Technology, pages 33–42, 2011.
[19] B. Settles. Active learning literature survey. Technical report, University of Wisconsin, Madison, 2010.
[20] P. Donmez and J. G. Carbonell. Proactive learning: Cost-sensitive active learning with multiple imperfect oracles. In Conference on Information and Knowledge Management (CIKM), pages 619–628, 2008.
[21] D. Golovin, A. Krause, and D. Ray. Near-optimal Bayesian active learning with noisy observations. In Advances in Neural Information Processing Systems (NIPS), pages 766–774, 2010.
[22] R. Greiner, A. J. Grove, and D. Roth. Learning cost-sensitive active classifiers. Artificial Intelligence, 139(2):137–174, 2002.
[23] X. Chai, L. Deng, Q. Yang, and C. X. Ling. Test-cost sensitive naive Bayes classification. In International Conference on Data Mining, pages 51–58, 2004.
[24] S. Esmeir and S. Markovitch. Anytime induction of cost-sensitive trees. In Advances in Neural Information Processing Systems (NIPS), pages 425–432, 2007.
[25] J. Cheng and M. S. Bernstein. Flock: Hybrid crowd-machine learning classifiers. In Proceedings of the 18th ACM Conference on Computer Supported Cooperative Work & Social Computing, pages 600–611, 2015.
[26] P. Dai, Mausam, and D. S. Weld. Decision-theoretic control of crowd-sourced workflows. In Association for the Advancement of Artificial Intelligence (AAAI), 2010.
[27] P. Liang, M. I. Jordan, and D. Klein. Learning from measurements in exponential families. In International Conference on Machine Learning (ICML), 2009.
[28] G. Angeli, J. Tibshirani, J. Y. Wu, and C. D. Manning. Combining distant and partial supervision for relation extraction. In Empirical Methods in Natural Language Processing (EMNLP), 2014.
Spatial Transformer Networks

Max Jaderberg  Karen Simonyan  Andrew Zisserman  Koray Kavukcuoglu
Google DeepMind, London, UK
{jaderberg,simonyan,zisserman,korayk}@google.com

Abstract

Convolutional Neural Networks define an exceptionally powerful class of models, but are still limited by the lack of ability to be spatially invariant to the input data in a computationally and parameter efficient manner. In this work we introduce a new learnable module, the Spatial Transformer, which explicitly allows the spatial manipulation of data within the network. This differentiable module can be inserted into existing convolutional architectures, giving neural networks the ability to actively spatially transform feature maps, conditional on the feature map itself, without any extra training supervision or modification to the optimisation process. We show that the use of spatial transformers results in models which learn invariance to translation, scale, rotation and more generic warping, resulting in state-of-the-art performance on several benchmarks, and for a number of classes of transformations.

1 Introduction

Over recent years, the landscape of computer vision has been drastically altered and pushed forward through the adoption of a fast, scalable, end-to-end learning framework, the Convolutional Neural Network (CNN) [18]. Though not a recent invention, we now see a cornucopia of CNN-based models achieving state-of-the-art results in classification, localisation, semantic segmentation, and action recognition tasks, amongst others. A desirable property of a system which is able to reason about images is to disentangle object pose and part deformation from texture and shape. The introduction of local max-pooling layers in CNNs has helped to satisfy this property by allowing a network to be somewhat spatially invariant to the position of features. However, due to the typically small spatial support for max-pooling (e.g.
2 × 2 pixels) this spatial invariance is only realised over a deep hierarchy of max-pooling and convolutions, and the intermediate feature maps (convolutional layer activations) in a CNN are not actually invariant to large transformations of the input data [5, 19]. This limitation of CNNs is due to having only a limited, pre-defined pooling mechanism for dealing with variations in the spatial arrangement of data. In this work we introduce the Spatial Transformer module, that can be included into a standard neural network architecture to provide spatial transformation capabilities. The action of the spatial transformer is conditioned on individual data samples, with the appropriate behaviour learnt during training for the task in question (without extra supervision). Unlike pooling layers, where the receptive fields are fixed and local, the spatial transformer module is a dynamic mechanism that can actively spatially transform an image (or a feature map) by producing an appropriate transformation for each input sample. The transformation is then performed on the entire feature map (non-locally) and can include scaling, cropping, rotations, as well as non-rigid deformations. This allows networks which include spatial transformers to not only select regions of an image that are most relevant (attention), but also to transform those regions to a canonical, expected pose to simplify inference in the subsequent layers. Notably, spatial transformers can be trained with standard back-propagation, allowing for end-to-end training of the models they are injected in.

Figure 1: The result of using a spatial transformer as the first layer of a fully-connected network trained for distorted MNIST digit classification. (a) The input to the spatial transformer network is an image of an MNIST digit that is distorted with random translation, scale, rotation, and clutter.
(b) The localisation network of the spatial transformer predicts a transformation to apply to the input image. (c) The output of the spatial transformer, after applying the transformation. (d) The classification prediction produced by the subsequent fully-connected network on the output of the spatial transformer. The spatial transformer network (a CNN including a spatial transformer module) is trained end-to-end with only class labels – no knowledge of the groundtruth transformations is given to the system. Spatial transformers can be incorporated into CNNs to benefit multifarious tasks, for example: (i) image classification: suppose a CNN is trained to perform multi-way classification of images according to whether they contain a particular digit – where the position and size of the digit may vary significantly with each sample (and are uncorrelated with the class); a spatial transformer that crops out and scale-normalizes the appropriate region can simplify the subsequent classification task, and lead to superior classification performance, see Fig. 1; (ii) co-localisation: given a set of images containing different instances of the same (but unknown) class, a spatial transformer can be used to localise them in each image; (iii) spatial attention: a spatial transformer can be used for tasks requiring an attention mechanism, such as in [11, 29], but is more flexible and can be trained purely with backpropagation without reinforcement learning. A key benefit of using attention is that transformed (and so attended), lower resolution inputs can be used in favour of higher resolution raw inputs, resulting in increased computational efficiency. The rest of the paper is organised as follows: Sect. 2 discusses some work related to our own, we introduce the formulation and implementation of the spatial transformer in Sect. 3, and finally give the results of experiments in Sect. 4. 
Additional experiments and implementation details are given in the supplementary material or can be found in the arXiv version.

2 Related Work

In this section we discuss the prior work related to the paper, covering the central ideas of modelling transformations with neural networks [12, 13, 27], learning and analysing transformation-invariant representations [3, 5, 8, 17, 19, 25], as well as attention and detection mechanisms for feature selection [1, 6, 9, 11, 23]. Early work by Hinton [12] looked at assigning canonical frames of reference to object parts, a theme which recurred in [13], where 2D affine transformations were modelled to create a generative model composed of transformed parts. The targets of the generative training scheme are the transformed input images, with the transformations between input images and targets given as an additional input to the network. The result is a generative model which can learn to generate transformed images of objects by composing parts. The notion of a composition of transformed parts is taken further by Tieleman [27], where learnt parts are explicitly affine-transformed, with the transform predicted by the network. Such generative capsule models are able to learn discriminative features for classification from transformation supervision. The invariance and equivariance of CNN representations to input image transformations are studied in [19] by estimating the linear relationships between representations of the original and transformed images. Cohen & Welling [5] analyse this behaviour in relation to symmetry groups, which is also exploited in the architecture proposed by Gens & Domingos [8], resulting in feature maps that are more invariant to symmetry groups. Other attempts to design transformation-invariant representations are scattering networks [3], and CNNs that construct filter banks of transformed filters [17, 25]. Stollenga et al.
[26] use a policy based on a network’s activations to gate the responses of the network’s filters for a subsequent forward pass of the same image, and so can allow attention to specific features. In this work, we aim to achieve invariant representations by manipulating the data rather than the feature extractors, something that was done for clustering in [7]. Neural networks with selective attention manipulate the data by taking crops, and so are able to learn translation invariance. Work such as [1, 23] are trained with reinforcement learning to avoid the need for a differentiable attention mechanism, while [11] use a differentiable attention mechanism by utilising Gaussian kernels in a generative model. The work by Girshick et al. [9] uses a region proposal algorithm as a form of attention, and [6] show that it is possible to regress salient regions with a CNN. The framework we present in this paper can be seen as a generalisation of differentiable attention to any spatial transformation.

Figure 2: The architecture of a spatial transformer module. The input feature map U is passed to a localisation network which regresses the transformation parameters θ. The regular spatial grid G over V is transformed to the sampling grid T_θ(G), which is applied to U as described in Sect. 3.3, producing the warped output feature map V. The combination of the localisation network and sampling mechanism defines a spatial transformer.

3 Spatial Transformers

In this section we describe the formulation of a spatial transformer. This is a differentiable module which applies a spatial transformation to a feature map during a single forward pass, where the transformation is conditioned on the particular input, producing a single output feature map. For multi-channel inputs, the same warping is applied to each channel.
For simplicity, in this section we consider single transforms and single outputs per transformer; however, we can generalise to multiple transformations, as shown in the experiments. The spatial transformer mechanism is split into three parts, shown in Fig. 2. In order of computation, first a localisation network (Sect. 3.1) takes the input feature map, and through a number of hidden layers outputs the parameters of the spatial transformation that should be applied to the feature map – this gives a transformation conditional on the input. Then, the predicted transformation parameters are used to create a sampling grid, which is a set of points where the input map should be sampled to produce the transformed output. This is done by the grid generator, described in Sect. 3.2. Finally, the feature map and the sampling grid are taken as inputs to the sampler, producing the output map sampled from the input at the grid points (Sect. 3.3). The combination of these three components forms a spatial transformer and will now be described in more detail in the following sections.

3.1 Localisation Network

The localisation network takes the input feature map U ∈ R^{H×W×C}, with width W, height H and C channels, and outputs θ, the parameters of the transformation T_θ to be applied to the feature map: θ = f_loc(U). The size of θ can vary depending on the transformation type that is parameterised, e.g. for an affine transformation θ is 6-dimensional as in (1). The localisation network function f_loc() can take any form, such as a fully-connected network or a convolutional network, but should include a final regression layer to produce the transformation parameters θ.

3.2 Parameterised Sampling Grid

To perform a warping of the input feature map, each output pixel is computed by applying a sampling kernel centered at a particular location in the input feature map (this is described fully in the next section). By pixel we refer to an element of a generic feature map, not necessarily an image.
In general, the output pixels are defined to lie on a regular grid G = {G_i} of pixels G_i = (x_i^t, y_i^t), forming an output feature map V ∈ R^{H′×W′×C}, where H′ and W′ are the height and width of the grid, and C is the number of channels, which is the same in the input and output. For clarity of exposition, assume for the moment that T_θ is a 2D affine transformation A_θ. We will discuss other transformations below. In this affine case, the pointwise transformation is

$$\begin{pmatrix} x_i^s \\ y_i^s \end{pmatrix} = \mathcal{T}_\theta(G_i) = \mathtt{A}_\theta \begin{pmatrix} x_i^t \\ y_i^t \\ 1 \end{pmatrix} = \begin{bmatrix} \theta_{11} & \theta_{12} & \theta_{13} \\ \theta_{21} & \theta_{22} & \theta_{23} \end{bmatrix} \begin{pmatrix} x_i^t \\ y_i^t \\ 1 \end{pmatrix} \qquad (1)$$

Figure 3: Two examples of applying the parameterised sampling grid to an image U producing the output V. (a) The sampling grid is the regular grid G = T_I(G), where I is the identity transformation parameters. (b) The sampling grid is the result of warping the regular grid with an affine transformation T_θ(G).

where (x_i^t, y_i^t) are the target coordinates of the regular grid in the output feature map, (x_i^s, y_i^s) are the source coordinates in the input feature map that define the sample points, and A_θ is the affine transformation matrix. We use height and width normalised coordinates, such that −1 ≤ x_i^t, y_i^t ≤ 1 when within the spatial bounds of the output, and −1 ≤ x_i^s, y_i^s ≤ 1 when within the spatial bounds of the input. The source/target transformation and sampling is equivalent to the standard texture mapping and coordinates used in graphics. The transform defined in (1) allows cropping, translation, rotation, scale, and skew to be applied to the input feature map, and requires only 6 parameters (the 6 elements of A_θ) to be produced by the localisation network. It allows cropping because if the transformation is a contraction (i.e. the determinant of the left 2 × 2 sub-matrix has magnitude less than unity) then the mapped regular grid will lie in a parallelogram of area less than the range of x_i^s, y_i^s.
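The grid generator of Eq. (1) takes only a few lines; a NumPy sketch (function name ours) that builds the regular target grid in normalised coordinates and maps it through A_θ, also illustrating cropping via a contraction:

```python
import numpy as np

def affine_grid(theta, H_out, W_out):
    """Grid generator of Eq. (1): map the regular output grid G
    through the 2x3 affine matrix A_theta.

    theta: 6 parameters (row-major A_theta). Returns source
    coordinates (x_s, y_s), each of shape (H_out, W_out), in the
    height/width-normalised range [-1, 1].
    """
    A = np.asarray(theta, float).reshape(2, 3)
    xt, yt = np.meshgrid(np.linspace(-1, 1, W_out),
                         np.linspace(-1, 1, H_out))
    # homogeneous target coordinates, shape 3 x N
    G = np.stack([xt.ravel(), yt.ravel(), np.ones(xt.size)])
    xs, ys = A @ G
    return xs.reshape(H_out, W_out), ys.reshape(H_out, W_out)

# Identity parameters reproduce the regular grid; a contraction
# (|det| of the left 2x2 block < 1, here 0.25) yields a crop.
xs, ys = affine_grid([1, 0, 0, 0, 1, 0], 3, 3)
crop_xs, crop_ys = affine_grid([0.5, 0, 0.2, 0, 0.5, 0.1], 3, 3)
```

In the second call the mapped grid covers only the sub-region x ∈ [−0.3, 0.7], y ∈ [−0.9, 1.1] of the input's normalised coordinate range, which is exactly the cropping behaviour described above.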
The effect of this transformation on the grid compared to the identity transform is shown in Fig. 3. The class of transformations T_θ may be more constrained, such as that used for attention,

$$\mathtt{A}_\theta = \begin{bmatrix} s & 0 & t_x \\ 0 & s & t_y \end{bmatrix} \qquad (2)$$

allowing cropping, translation, and isotropic scaling by varying s, t_x, and t_y. The transformation T_θ can also be more general, such as a plane projective transformation with 8 parameters, piecewise affine, or a thin plate spline. Indeed, the transformation can have any parameterised form, provided that it is differentiable with respect to the parameters – this crucially allows gradients to be backpropagated through from the sample points T_θ(G_i) to the localisation network output θ. If the transformation is parameterised in a structured, low-dimensional way, this reduces the complexity of the task assigned to the localisation network. For instance, a generic class of structured and differentiable transformations, which is a superset of attention, affine, projective, and thin plate spline transformations, is T_θ = M_θ B, where B is a target grid representation (e.g. in (1), B is the regular grid G in homogeneous coordinates), and M_θ is a matrix parameterised by θ. In this case it is possible to not only learn how to predict θ for a sample, but also to learn B for the task at hand.

3.3 Differentiable Image Sampling

To perform a spatial transformation of the input feature map, a sampler must take the set of sampling points T_θ(G), along with the input feature map U and produce the sampled output feature map V. Each (x_i^s, y_i^s) coordinate in T_θ(G) defines the spatial location in the input where a sampling kernel is applied to get the value at a particular pixel in the output V. This can be written as

$$V_i^c = \sum_n^H \sum_m^W U_{nm}^c \, k(x_i^s - m; \Phi_x) \, k(y_i^s - n; \Phi_y) \quad \forall i \in [1 \dots H'W'] \quad \forall c \in [1 \dots C] \qquad (3)$$

where Φ_x and Φ_y are the parameters of a generic sampling kernel k() which defines the image interpolation (e.g.
bilinear), U_{nm}^c is the value at location (n, m) in channel c of the input, and V_i^c is the output value for pixel i at location (x_i^t, y_i^t) in channel c. Note that the sampling is done identically for each channel of the input, so every channel is transformed in an identical way (this preserves spatial consistency between channels).

In theory, any sampling kernel can be used, as long as (sub-)gradients can be defined with respect to x_i^s and y_i^s. For example, using the integer sampling kernel reduces (3) to

$$V_i^c = \sum_n^H \sum_m^W U_{nm}^c \, \delta(\lfloor x_i^s + 0.5 \rfloor - m) \, \delta(\lfloor y_i^s + 0.5 \rfloor - n) \qquad (4)$$

where ⌊x + 0.5⌋ rounds x to the nearest integer and δ() is the Kronecker delta function. This sampling kernel equates to just copying the value at the nearest pixel to (x_i^s, y_i^s) to the output location (x_i^t, y_i^t). Alternatively, a bilinear sampling kernel can be used, giving

$$V_i^c = \sum_n^H \sum_m^W U_{nm}^c \, \max(0, 1 - |x_i^s - m|) \, \max(0, 1 - |y_i^s - n|) \qquad (5)$$

To allow backpropagation of the loss through this sampling mechanism we can define the gradients with respect to U and G. For bilinear sampling (5) the partial derivatives are

$$\frac{\partial V_i^c}{\partial U_{nm}^c} = \sum_n^H \sum_m^W \max(0, 1 - |x_i^s - m|) \, \max(0, 1 - |y_i^s - n|) \qquad (6)$$

$$\frac{\partial V_i^c}{\partial x_i^s} = \sum_n^H \sum_m^W U_{nm}^c \, \max(0, 1 - |y_i^s - n|) \begin{cases} 0 & \text{if } |m - x_i^s| \ge 1 \\ 1 & \text{if } m \ge x_i^s \\ -1 & \text{if } m < x_i^s \end{cases} \qquad (7)$$

and similarly to (7) for ∂V_i^c/∂y_i^s. This gives us a (sub-)differentiable sampling mechanism, allowing loss gradients to flow back not only to the input feature map (6), but also to the sampling grid coordinates (7), and therefore back to the transformation parameters θ and localisation network, since ∂x_i^s/∂θ and ∂y_i^s/∂θ can be easily derived from (1) for example. Due to discontinuities in the sampling functions, sub-gradients must be used. This sampling mechanism can be implemented very efficiently on GPU, by ignoring the sum over all input locations and instead just looking at the kernel support region for each output pixel.
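The bilinear kernel of Eq. (5) has a support of at most four pixels per output location, so the double sum collapses to four terms — the efficiency trick mentioned above. A NumPy sketch of the forward pass (function name ours; single channel and pixel-unit source coordinates for brevity):

```python
import numpy as np

def bilinear_sample(U, xs, ys):
    """Bilinear sampling of Eq. (5), with the double sum replaced by
    its 4-pixel support region.

    U: (H, W) single-channel input. xs, ys: source coordinates in
    pixel units (0..W-1, 0..H-1), any matching shape. Out-of-range
    coordinates are clamped to the border.
    """
    H, W = U.shape
    x0 = np.clip(np.floor(xs).astype(int), 0, W - 1)
    y0 = np.clip(np.floor(ys).astype(int), 0, H - 1)
    x1 = np.clip(x0 + 1, 0, W - 1)
    y1 = np.clip(y0 + 1, 0, H - 1)
    # the max(0, 1 - |x_s - m|) weights of the two neighbouring
    # columns, and likewise for the rows
    wx1 = np.clip(xs - x0, 0.0, 1.0); wx0 = 1.0 - wx1
    wy1 = np.clip(ys - y0, 0.0, 1.0); wy0 = 1.0 - wy1
    return (U[y0, x0] * wy0 * wx0 + U[y0, x1] * wy0 * wx1 +
            U[y1, x0] * wy1 * wx0 + U[y1, x1] * wy1 * wx1)
```

Integer coordinates reproduce input pixels exactly, and fractional coordinates blend the four surrounding pixels; the weights are piecewise linear in (x_i^s, y_i^s), which is where the sub-gradients of Eqs. (6) and (7) come from.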
3.4 Spatial Transformer Networks

The combination of the localisation network, grid generator, and sampler form a spatial transformer (Fig. 2). This is a self-contained module which can be dropped into a CNN architecture at any point, and in any number, giving rise to spatial transformer networks. This module is computationally very fast and does not impair the training speed, causing very little time overhead when used naively, and even potential speedups in attentive models due to subsequent downsampling that can be applied to the output of the transformer. Placing spatial transformers within a CNN allows the network to learn how to actively transform the feature maps to help minimise the overall cost function of the network during training. The knowledge of how to transform each training sample is compressed and cached in the weights of the localisation network (and also the weights of the layers previous to a spatial transformer) during training. For some tasks, it may also be useful to feed the output of the localisation network, θ, forward to the rest of the network, as it explicitly encodes the transformation, and hence the pose, of a region or object. It is also possible to use spatial transformers to downsample or oversample a feature map, as one can define the output dimensions H′ and W′ to be different to the input dimensions H and W. However, with sampling kernels with a fixed, small spatial support (such as the bilinear kernel), downsampling with a spatial transformer can cause aliasing effects. Finally, it is possible to have multiple spatial transformers in a CNN. Placing multiple spatial transformers at increasing depths of a network allows transformations of increasingly abstract representations, and also gives the localisation networks potentially more informative representations to base the predicted transformation parameters on.
One can also use multiple spatial transformers in parallel – this can be useful if there are multiple objects or parts of interest in a feature map that should be focussed on individually. A limitation of this architecture in a purely feed-forward network is that the number of parallel spatial transformers limits the number of objects that the network can model.

Table 1: Left: The percentage errors for different models on different distorted MNIST datasets.

                    MNIST Distortion
Model              R     RTS    P     E
FCN               2.1    5.2   3.1   3.2
CNN               1.2    0.8   1.5   1.4
ST-FCN   Aff      1.2    0.8   1.5   2.7
         Proj     1.3    0.9   1.4   2.6
         TPS      1.1    0.8   1.4   2.4
ST-CNN   Aff      0.7    0.5   0.8   1.2
         Proj     0.8    0.6   0.8   1.3
         TPS      0.7    0.5   0.8   1.1

The different distorted MNIST datasets we test are TC: translated and cluttered, R: rotated, RTS: rotated, translated, and scaled, P: projective distortion, E: elastic distortion. All the models used for each experiment have the same number of parameters, and same base structure for all experiments. Right: Some example test images where a spatial transformer network correctly classifies the digit but a CNN fails. (a) The inputs to the networks. (b) The transformations predicted by the spatial transformers, visualised by the grid T_θ(G). (c) The outputs of the spatial transformers. E and RTS examples use thin plate spline spatial transformers (ST-CNN TPS), while R examples use affine spatial transformers (ST-CNN Aff) with the angles of the affine transformations given. For videos showing animations of these experiments and more see https://goo.gl/qdEhUu.

4 Experiments

In this section we explore the use of spatial transformer networks on a number of supervised learning tasks. In Sect. 4.1 we begin with experiments on distorted versions of the MNIST handwriting dataset, showing the ability of spatial transformers to improve classification performance through actively transforming the input images. In Sect.
4.2 we test spatial transformer networks on a challenging real-world dataset, Street View House Numbers [21], for number recognition, showing state-of-the-art results using multiple spatial transformers embedded in the convolutional stack of a CNN. Finally, in Sect. 4.3, we investigate the use of multiple parallel spatial transformers for fine-grained classification, showing state-of-the-art performance on the CUB-200-2011 birds dataset [28] by automatically discovering object parts and learning to attend to them. Further experiments with MNIST addition and co-localisation can be found in the supplementary material.

4.1 Distorted MNIST

In this section we use the MNIST handwriting dataset as a testbed for exploring the range of transformations to which a network can learn invariance by using a spatial transformer. We begin with experiments where we train different neural network models to classify MNIST data that has been distorted in various ways: rotation (R); rotation, scale and translation (RTS); projective transformation (P); elastic warping (E) – note that elastic warping is destructive and cannot be inverted in some cases. The full details of the distortions used to generate this data are given in the supplementary material. We train baseline fully-connected (FCN) and convolutional (CNN) neural networks, as well as networks with spatial transformers acting on the input before the classification network (ST-FCN and ST-CNN). The spatial transformer networks all use bilinear sampling, but variants use different transformation functions: an affine transformation (Aff), projective transformation (Proj), and a 16-point thin plate spline transformation (TPS) with a regular grid of control points. The CNN models include two max-pooling layers.
All networks have approximately the same number of parameters, are trained with identical optimisation schemes (backpropagation, SGD, scheduled learning rate decrease, with a multinomial cross entropy loss), and all with three weight layers in the classification network. The results of these experiments are shown in Table 1 (left). Looking at any particular type of distortion of the data, it is clear that a spatial transformer enabled network outperforms its counterpart base network. For the case of rotation, translation, and scale distortion (RTS), the ST-CNN achieves 0.5% and 0.6% depending on the class of transform used for T_θ, whereas a CNN, with two max-pooling layers to provide spatial invariance, achieves 0.8% error. This is in fact the same error that the ST-FCN achieves, which is without a single convolution or max-pooling layer in its network, showing that using a spatial transformer is an alternative way to achieve spatial invariance. ST-CNN models consistently perform better than ST-FCN models due to max-pooling layers in ST-CNN providing even more spatial invariance, and convolutional layers better modelling local structure. We also test our models in a noisy environment, on 60 × 60 images with translated MNIST digits and background clutter (see Fig. 1 third row for an example): an FCN gets 13.2% error, a CNN gets 3.5% error, while an ST-FCN gets 2.0% error and an ST-CNN gets 1.7% error.

Table 2: Left: The sequence error (%) for SVHN multi-digit recognition on crops of 64 × 64 pixels (64px), and inflated crops of 128 × 128 (128px) which include more background. *The best reported result from [1] uses model averaging and Monte Carlo averaging, whereas the results from other models are from a single forward pass of a single model. Right: (a) The schematic of the ST-CNN Multi model. The transformations of each spatial transformer (ST) are applied to the convolutional feature map produced by the previous layer. (b) The result of the composition of the affine transformations predicted by the four spatial transformers in ST-CNN Multi, visualised on the input image.

Model             64px   128px
Maxout CNN [10]    4.0     –
CNN (ours)         4.0    5.6
DRAM* [1]          3.9    4.5
ST-CNN Single      3.7    3.9
ST-CNN Multi       3.6    3.9

Looking at the results between different classes of transformation, the thin plate spline transformation (TPS) is the most powerful, being able to reduce error on elastically deformed digits by reshaping the input into a prototype instance of the digit, reducing the complexity of the task for the classification network, and does not overfit on simpler data, e.g. R. Interestingly, the transformation of inputs for all ST models leads to a “standard” upright posed digit – this is the mean pose found in the training data. In Table 1 (right), we show the transformations performed for some test cases where a CNN is unable to correctly classify the digit, but a spatial transformer network can.

4.2 Street View House Numbers

We now test our spatial transformer networks on a challenging real-world dataset, Street View House Numbers (SVHN) [21]. This dataset contains around 200k real world images of house numbers, with the task to recognise the sequence of numbers in each image. There are between 1 and 5 digits in each image, with a large variability in scale and spatial arrangement. We follow the experimental setup as in [1, 10], where the data is preprocessed by taking 64 × 64 crops around each digit sequence. We also use an additional more loosely cropped 128 × 128 dataset as in [1]. We train a baseline character sequence CNN model with 11 hidden layers leading to five independent softmax classifiers, each one predicting the digit at a particular position in the sequence. This is the character sequence model used in [16], where each classifier includes a null-character output to model variable length sequences.
This model matches the results obtained in [10]. We extend this baseline CNN to include a spatial transformer immediately following the input (ST-CNN Single), where the localisation network is a four-layer CNN. We also define another extension where before each of the first four convolutional layers of the baseline CNN, we insert a spatial transformer (ST-CNN Multi). In this case, the localisation networks are all two-layer fully connected networks with 32 units per layer. In the ST-CNN Multi model, the spatial transformer before the first convolutional layer acts on the input image as with the previous experiments, however the subsequent spatial transformers deeper in the network act on the convolutional feature maps, predicting a transformation from them and transforming these feature maps (this is visualised in Table 2 (right) (a)). This allows deeper spatial transformers to predict a transformation based on richer features rather than the raw image. All networks are trained from scratch with SGD and dropout [14], with randomly initialised weights, except for the regression layers of spatial transformers which are initialised to predict the identity transform. Affine transformations and bilinear sampling kernels are used for all spatial transformer networks in these experiments. The results of this experiment are shown in Table 2 (left) – the spatial transformer models obtain state-of-the-art results, reaching 3.6% error on 64 × 64 images compared to previous state-of-the-art of 3.9% error. Interestingly on 128 × 128 images, while other methods degrade in performance, an ST-CNN achieves 3.9% error while the previous state of the art at 4.5% error is with a recurrent attention model that uses an ensemble of models with Monte Carlo averaging – in contrast the ST-CNN models require only a single forward pass of a single model.
This accuracy is achieved due to the fact that the spatial transformers crop and rescale the parts of the feature maps that correspond to the digit, focussing resolution and network capacity only on these areas (see Table 2 (right) (b) for some examples). In terms of computation speed, the ST-CNN Multi model is only 6% slower (forward and backward pass) than the CNN.

Table 3: Left: The accuracy (%) on CUB-200-2011 bird classification dataset. Spatial transformer networks with two spatial transformers (2×ST-CNN) and four spatial transformers (4×ST-CNN) in parallel outperform other models. 448px resolution images can be used with the ST-CNN without an increase in computational cost due to downsampling to 224px after the transformers. Right: The transformation predicted by the spatial transformers of 2×ST-CNN (top row) and 4×ST-CNN (bottom row) on the input image. Notably for the 2×ST-CNN, one of the transformers (shown in red) learns to detect heads, while the other (shown in green) detects the body, and similarly for the 4×ST-CNN.

Model                 Accuracy (%)
Cimpoi ’15 [4]            66.7
Zhang ’14 [30]            74.9
Branson ’14 [2]           75.7
Lin ’15 [20]              80.9
Simon ’15 [24]            81.0
CNN (ours) 224px          82.3
2×ST-CNN 224px            83.1
2×ST-CNN 448px            83.9
4×ST-CNN 448px            84.1

4.3 Fine-Grained Classification

In this section, we use a spatial transformer network with multiple transformers in parallel to perform fine-grained bird classification. We evaluate our models on the CUB-200-2011 birds dataset [28], containing 6k training images and 5.8k test images, covering 200 species of birds. The birds appear at a range of scales and orientations, are not tightly cropped, and require detailed texture and shape analysis to distinguish. In our experiments, we only use image class labels for training.
We consider a strong baseline CNN model – an Inception architecture with batch normalisation [15] pre-trained on ImageNet [22] and fine-tuned on CUB – which by itself achieves state-of-the-art accuracy of 82.3% (previous best result is 81.0% [24]). We then train a spatial transformer network, ST-CNN, which contains 2 or 4 parallel spatial transformers, parameterised for attention and acting on the input image. Discriminative image parts, captured by the transformers, are passed to the part description sub-nets (each of which is also initialised by Inception). The resulting part representations are concatenated and classified with a single softmax layer. The whole architecture is trained on image class labels end-to-end with backpropagation (details in supplementary material). The results are shown in Table 3 (left). The 4×ST-CNN achieves an accuracy of 84.1%, outperforming the baseline by 1.8%. In the visualisations of the transforms predicted by 2×ST-CNN (Table 3 (right)) one can see interesting behaviour has been learnt: one spatial transformer (red) has learnt to become a head detector, while the other (green) fixates on the central part of the body of a bird. The resulting output from the spatial transformers for the classification network is a somewhat pose-normalised representation of a bird. While previous work such as [2] explicitly define parts of the bird, training separate detectors for these parts with supplied keypoint training data, the ST-CNN is able to discover and learn part detectors in a data-driven manner without any additional supervision. In addition, spatial transformers allow for the use of 448px resolution input images without any impact on performance, as the output of the transformed 448px images are sampled at 224px before being processed.

5 Conclusion

In this paper we introduced a new self-contained module for neural networks – the spatial transformer.
This module can be dropped into a network and perform explicit spatial transformations of features, opening up new ways for neural networks to model data, and is learnt in an end-to-end fashion, without making any changes to the loss function. While CNNs provide an incredibly strong baseline, we see gains in accuracy using spatial transformers across multiple tasks, resulting in state-of-the-art performance. Furthermore, the regressed transformation parameters from the spatial transformer are available as an output and could be used for subsequent tasks. While we only explore feed-forward networks in this work, early experiments show spatial transformers to be powerful in recurrent models, and useful for tasks requiring the disentangling of object reference frames.

References

[1] J. Ba, V. Mnih, and K. Kavukcuoglu. Multiple object recognition with visual attention. ICLR, 2015.
[2] S. Branson, G. Van Horn, S. Belongie, and P. Perona. Bird species categorization using pose normalized deep convolutional nets. BMVC, 2014.
[3] J. Bruna and S. Mallat. Invariant scattering convolution networks. IEEE PAMI, 35(8):1872–1886, 2013.
[4] M. Cimpoi, S. Maji, and A. Vedaldi. Deep filter banks for texture recognition and segmentation. In CVPR, 2015.
[5] T. S. Cohen and M. Welling. Transformation properties of learned visual representations. ICLR, 2015.
[6] D. Erhan, C. Szegedy, A. Toshev, and D. Anguelov. Scalable object detection using deep neural networks. In CVPR, 2014.
[7] B. J. Frey and N. Jojic. Fast, large-scale transformation-invariant clustering. In NIPS, 2001.
[8] R. Gens and P. M. Domingos. Deep symmetry networks. In NIPS, 2014.
[9] R. B. Girshick, J. Donahue, T. Darrell, and J. Malik. Rich feature hierarchies for accurate object detection and semantic segmentation. In CVPR, 2014.
[10] I. J. Goodfellow, Y. Bulatov, J. Ibarz, S. Arnoud, and V. Shet. Multi-digit number recognition from street view imagery using deep convolutional neural networks. arXiv:1312.6082, 2013.
[11] K. Gregor, I. Danihelka, A. Graves, and D. Wierstra. DRAW: A recurrent neural network for image generation. ICML, 2015.
[12] G. E. Hinton. A parallel computation that assigns canonical object-based frames of reference. In IJCAI, 1981.
[13] G. E. Hinton, A. Krizhevsky, and S. D. Wang. Transforming auto-encoders. In ICANN, 2011.
[14] G. E. Hinton, N. Srivastava, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov. Improving neural networks by preventing co-adaptation of feature detectors. CoRR, abs/1207.0580, 2012.
[15] S. Ioffe and C. Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. ICML, 2015.
[16] M. Jaderberg, K. Simonyan, A. Vedaldi, and A. Zisserman. Synthetic data and artificial neural networks for natural scene text recognition. NIPS DLW, 2014.
[17] A. Kanazawa, A. Sharma, and D. Jacobs. Locally scale-invariant convolutional neural networks. In NIPS, 2014.
[18] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998.
[19] K. Lenc and A. Vedaldi. Understanding image representations by measuring their equivariance and equivalence. CVPR, 2015.
[20] T. Lin, A. RoyChowdhury, and S. Maji. Bilinear CNN models for fine-grained visual recognition. arXiv:1504.07889, 2015.
[21] Y. Netzer, T. Wang, A. Coates, A. Bissacco, B. Wu, and A. Y. Ng. Reading digits in natural images with unsupervised feature learning. In NIPS DLW, 2011.
[22] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, et al. ImageNet large scale visual recognition challenge. arXiv:1409.0575, 2014.
[23] P. Sermanet, A. Frome, and E. Real. Attention for fine-grained categorization. arXiv:1412.7054, 2014.
[24] M. Simon and E. Rodner. Neural activation constellations: Unsupervised part model discovery with convolutional networks. arXiv:1504.08289, 2015.
[25] K. Sohn and H. Lee. Learning invariant representations with local transformations. arXiv:1206.6418, 2012.
[26] M. F. Stollenga, J. Masci, F. Gomez, and J. Schmidhuber. Deep networks with internal selective attention through feedback connections. In NIPS, 2014.
[27] T. Tieleman. Optimizing Neural Networks that Generate Images. PhD thesis, University of Toronto, 2014.
[28] C. Wah, S. Branson, P. Welinder, P. Perona, and S. Belongie. The Caltech-UCSD Birds-200-2011 dataset. 2011.
[29] K. Xu, J. Ba, R. Kiros, K. Cho, A. Courville, R. Salakhutdinov, R. Zemel, and Y. Bengio. Show, attend and tell: Neural image caption generation with visual attention. ICML, 2015.
[30] X. Zhang, J. Zou, X. Ming, K. He, and J. Sun. Efficient and accurate approximations of nonlinear convolutional networks. arXiv:1411.4229, 2014.
Precision-Recall-Gain Curves: PR Analysis Done Right

Peter A. Flach, Intelligent Systems Laboratory, University of Bristol, United Kingdom, Peter.Flach@bristol.ac.uk
Meelis Kull, Intelligent Systems Laboratory, University of Bristol, United Kingdom, Meelis.Kull@bristol.ac.uk

Abstract

Precision-Recall analysis abounds in applications of binary classification where true negatives do not add value and hence should not affect assessment of the classifier’s performance. Perhaps inspired by the many advantages of receiver operating characteristic (ROC) curves and the area under such curves for accuracy-based performance assessment, many researchers have taken to report Precision-Recall (PR) curves and associated areas as performance metric. We demonstrate in this paper that this practice is fraught with difficulties, mainly because of incoherent scale assumptions – e.g., the area under a PR curve takes the arithmetic mean of precision values whereas the Fβ score applies the harmonic mean. We show how to fix this by plotting PR curves in a different coordinate system, and demonstrate that the new Precision-Recall-Gain curves inherit all key advantages of ROC curves. In particular, the area under Precision-Recall-Gain curves conveys an expected F1 score on a harmonic scale, and the convex hull of a Precision-Recall-Gain curve allows us to calibrate the classifier’s scores so as to determine, for each operating point on the convex hull, the interval of β values for which the point optimises Fβ. We demonstrate experimentally that the area under traditional PR curves can easily favour models with lower expected F1 score than others, and so the use of Precision-Recall-Gain curves will result in better model selection.

1 Introduction and Motivation

In machine learning and related areas we often need to optimise multiple performance measures, such as per-class classification accuracies, precision and recall in information retrieval, etc.
We then have the option to fix a particular way to trade off these performance measures: e.g., we can use overall classification accuracy which gives equal weight to correctly classified instances regardless of their class; or we can use the F1 score which takes the harmonic mean of precision and recall. However, multi-objective optimisation suggests that to delay fixing a trade-off for as long as possible has practical benefits, such as the ability to adapt a model or set of models to changing operating contexts. The latter is essentially what receiver operating characteristic (ROC) curves do for binary classification. In an ROC plot we plot true positive rate (the proportion of correctly classified positives, also denoted tpr) on the y-axis against false positive rate (the proportion of incorrectly classified negatives, also denoted fpr) on the x-axis. A categorical classifier evaluated on a test set gives rise to a single ROC point, while a classifier which outputs scores (henceforth called a model) can generate a set of points (commonly referred to as the ROC curve) by varying the decision threshold (Figure 1 (left)). ROC curves are widely used in machine learning and their main properties are well understood [3]. These properties can be summarised as follows.

Figure 1: (left) ROC curve with non-dominated points (red circles) and convex hull (red dotted line). (right) Corresponding Precision-Recall curve with non-dominated points (red circles).

Universal baselines: the major diagonal of an ROC plot depicts the line of random performance which can be achieved without training.
More specifically, a random classifier assigning the positive class with probability p and the negative class with probability 1−p has expected true positive rate of p and true negative rate of 1−p, represented by the ROC point (p, p). The upper-left (lower-right) triangle of ROC plots hence denotes better (worse) than random performance. Related baselines include the always-negative and always-positive classifier which occupy fixed points in ROC plots (the origin and the upper right-hand corner, respectively). These baselines are universal as they don’t depend on the class distribution. Linear interpolation: any point on a straight line between two points representing the performance of two classifiers (or thresholds) A and B can be achieved by making a suitably biased random choice between A and B [14]. Effectively this creates an interpolated contingency table which is a linear combination of the contingency tables of A and B, and since all three tables involve the same numbers of positives and negatives it follows that the interpolated accuracy as well as true and false positive rates are also linear combinations of the corresponding quantities pertaining to A and B. The slope of the connecting line determines the trade-off between the classes under which any linear combination of A and B would yield equivalent performance. In particular, test set accuracy assuming uniform misclassification costs is represented by accuracy isometrics with slope (1 −π)/π, where π is the proportion of positives [5]. Optimality: a point D dominates another point E if D’s tpr and fpr are not worse than E’s and at least one of them is strictly better. The set of non-dominated points – the Pareto front – establishes the set of classifiers or thresholds that are optimal under some trade-off between the classes. 
Due to linearity any interpolation between non-dominated points is both achievable and non-dominated, giving rise to the convex hull (ROCCH) which can be easily constructed both algorithmically and by visual inspection. Area: the proportion of the unit square which falls under an ROC curve (AUROC) has a well-known meaning as a ranking performance measure: it estimates the probability that a randomly chosen positive is ranked higher by the model than a randomly chosen negative [7]. More importantly in a classification context, there is a linear relationship between $\mathrm{AUROC} = \int_0^1 tpr \, \mathrm{d}fpr$ and the expected accuracy $acc = \pi \cdot tpr + (1-\pi)(1-fpr)$ averaged over all possible predicted positive rates $rate = \pi \cdot tpr + (1-\pi) fpr$, which can be established by a change of variable: $\mathbb{E}[acc] = \int_0^1 acc \, \mathrm{d}rate = \pi(1-\pi)(2\,\mathrm{AUROC} - 1) + 1/2$ [8]. Calibration: slopes of convex hull segments can be interpreted as empirical likelihood ratios associated with a particular interval of raw classifier scores. This gives rise to a non-parametric calibration procedure which is also called isotonic regression [19] or pool adjacent violators [4] and results in a calibration map which maps each segment of ROCCH with slope r to a calibrated score $c = \pi r / (\pi r + (1-\pi))$ [6]. Define a skew-sensitive version of accuracy as $acc_c \triangleq 2c\pi \cdot tpr + 2(1-c)(1-\pi)(1-fpr)$ (i.e., standard accuracy is $acc_{c=1/2}$); then a perfectly calibrated classifier outputs, for every instance, the value of c for which the instance is on the $acc_c$ decision boundary. Alternative solutions for each of these exist. For example, parametric alternatives to ROCCH calibration exist based on the logistic function, e.g. Platt scaling [13]; as do alternative ways to aggregate classification performance across different operating points, e.g. the Brier score [8]. However, the power of ROC analysis derives from the combination of the above desirable properties, which helps to explain its popularity across the machine learning discipline.
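The linear-interpolation and area properties above can be checked numerically; a small self-contained sketch (contingency tables and scores are illustrative, not from the paper):

```python
import numpy as np

def rates(tp, fp, fn, tn):
    """True and false positive rate from a contingency table."""
    return tp / (tp + fn), fp / (fp + tn)

# Linear interpolation: randomly choosing between thresholds A and B
# with weight lam yields the convex combination of their ROC points.
A = dict(tp=30, fp=10, fn=20, tn=40)
B = dict(tp=45, fp=25, fn=5, tn=25)
lam = 0.3
M = {k: lam * A[k] + (1 - lam) * B[k] for k in A}   # interpolated table
tprA, fprA = rates(**A); tprB, fprB = rates(**B); tprM, fprM = rates(**M)
assert np.isclose(tprM, lam * tprA + (1 - lam) * tprB)
assert np.isclose(fprM, lam * fprA + (1 - lam) * fprB)

# Area: AUROC as a ranking probability equals the trapezoidal area
# under the empirical ROC curve (scores chosen distinct, so no ties).
pos = np.array([0.9, 0.8, 0.4])    # scores of positives
neg = np.array([0.7, 0.3, 0.2])    # scores of negatives
auroc_rank = (pos[:, None] > neg[None, :]).mean()
scores = np.concatenate([pos, neg])
labels = np.concatenate([np.ones(pos.size), np.zeros(neg.size)])
labels = labels[np.argsort(-scores)]
tpr = np.concatenate([[0], np.cumsum(labels) / pos.size])
fpr = np.concatenate([[0], np.cumsum(1 - labels) / neg.size])
auroc_area = np.sum(np.diff(fpr) * (tpr[1:] + tpr[:-1]) / 2)
assert np.isclose(auroc_rank, auroc_area)
```

Both AUROC computations agree (8/9 for these scores), matching the ranking interpretation of the area property.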
This paper presents fundamental improvements in Precision-Recall analysis, inspired by ROC analysis, as follows. (i) We identify in Section 2 the problems with current practice in Precision-Recall curves by demonstrating that they fail to satisfy each of the above properties in some respect. (ii) We propose a principled way to remedy all these problems by means of a change of coordinates in Section 3. (iii) In particular, our improved Precision-Recall-Gain curves enclose an area that is directly related to expected F1 score – on a harmonic scale – in a similar way as AUROC is related to expected accuracy. (iv) Furthermore, with Precision-Recall-Gain curves it is possible to calibrate a model for Fβ in the sense that the predicted score for any instance determines the value of β for which the instance is on the Fβ decision boundary. (v) We give experimental evidence in Section 4 that this matters by demonstrating that the area under traditional Precision-Recall curves can easily favour models with lower expected F1 score than others. Proofs of the formal results are found in the Supplementary Material; see also http://www.cs.bris.ac.uk/~flach/PRGcurves/.

2 Traditional Precision-Recall Analysis

Over-abundance of negative examples is a common phenomenon in many subfields of machine learning and data mining, including information retrieval, recommender systems and social network analysis. Indeed, most web pages are irrelevant for most queries, and most links are absent from most networks. Classification accuracy is not a sensible evaluation measure in such situations, as it over-values the always-negative classifier. Neither does adjusting the class imbalance through cost-sensitive versions of accuracy help, as this will not just downplay the benefit of true negatives but also the cost of false positives.
A good solution in this case is to ignore true negatives altogether and use precision, defined as the proportion of true positives among the positive predictions, as performance metric instead of false positive rate. In this context, the true positive rate is usually renamed to recall. More formally, we define precision as prec = TP/(TP+FP) and recall as rec = TP/(TP+FN), where TP, FP and FN denote the number of true positives, false positives and false negatives, respectively. Perhaps motivated by the appeal of ROC plots, many researchers have begun to produce Precision-Recall or PR plots, with precision on the y-axis plotted against recall on the x-axis. Figure 1 (right) shows the PR curve corresponding to the ROC curve on the left. Clearly there is a one-to-one correspondence between the two plots as both are based on the same contingency tables [2]. In particular, the precision associated with an ROC point is proportional to the angle between the x-axis and the line connecting the point with the origin. However, this is where the similarity ends, as PR plots have none of the aforementioned desirable properties of ROC plots.

Non-universal baselines: a random classifier has precision π and hence baseline performance is a horizontal line which depends on the class distribution. The always-positive classifier is at the right-most end of this baseline (the always-negative classifier has undefined precision).

Non-linear interpolation: the main reason for this is that the precision of a linearly interpolated contingency table is only a linear combination of the original precision values if the two classifiers have the same predicted positive rate (which is impossible if the two contingency tables arise from different decision thresholds on the same model). [2] discusses this further and also gives an interpolation formula. More generally, it isn't meaningful to take the arithmetic average of precision values.
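The non-linearity of precision under interpolation can be seen with a few lines of code. In this sketch (counts are made up for illustration) the precision of a linearly interpolated contingency table differs from the linear combination of the two precision values:

```python
def precision(tp, fp):
    return tp / (tp + fp)

# tp, fp of two hypothetical thresholds on the same test set
tp1, fp1 = 40, 10     # high-precision operating point
tp2, fp2 = 45, 45     # high-recall operating point (different positive rate)

lam = 0.5
# interpolated contingency table, as in the formula of [2]
tp_i, fp_i = lam * tp1 + (1 - lam) * tp2, lam * fp1 + (1 - lam) * fp2

interp_prec = precision(tp_i, fp_i)    # precision of the mixed classifier
linear_prec = lam * precision(tp1, fp1) + (1 - lam) * precision(tp2, fp2)

# The two quantities disagree: precision does not interpolate linearly.
assert abs(interp_prec - linear_prec) > 1e-3
```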
Non-convex Pareto front: the set of non-dominated operating points continues to be well-defined (see the red circles in Figure 1 (right)), but in the absence of linear interpolation this set isn't convex for PR curves, nor is it straightforward to determine by visual inspection.

Uninterpretable area: although many authors report the area under the PR curve (AUPR), it doesn't have a meaningful interpretation beyond the geometric one of expected precision when uniformly varying the recall (and even then the use of the arithmetic average cannot be justified). Furthermore, PR plots have unachievable regions at the lower right-hand side, the size of which depends on the class distribution [1].

No calibration: although some results exist regarding the relationship between calibrated scores and F1 score (more about this below), these are unrelated to the PR curve. To the best of our knowledge there is no published procedure to output scores that are calibrated for Fβ – that is, which give the value of β for which the instance is on the Fβ decision boundary.

2.1 The Fβ measure

The standard way to combine precision and recall into a single performance measure is through the F1 score [16]. It is commonly defined as the harmonic mean of precision and recall:

$$F_1 \triangleq \frac{2}{1/\mathit{prec} + 1/\mathit{rec}} = \frac{2\,\mathit{prec}\cdot\mathit{rec}}{\mathit{prec} + \mathit{rec}} = \frac{TP}{TP + (FP + FN)/2} \qquad (1)$$

The last form demonstrates that the harmonic mean is natural here, as it corresponds to taking the arithmetic mean of the numbers of false positives and false negatives.
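A quick numerical check (made-up counts) confirms that the three forms of F1 in Eq. (1) agree:

```python
# Hypothetical counts: 30 true positives, 10 false positives, 20 false negatives.
tp, fp, fn = 30, 10, 20
prec = tp / (tp + fp)
rec = tp / (tp + fn)

f1_harmonic = 2 / (1 / prec + 1 / rec)        # harmonic mean of prec and rec
f1_ratio = 2 * prec * rec / (prec + rec)      # equivalent product form
f1_counts = tp / (tp + (fp + fn) / 2)         # count-based form

assert abs(f1_harmonic - f1_ratio) < 1e-12
assert abs(f1_harmonic - f1_counts) < 1e-12
```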
Another way to understand the F1 score is as the accuracy in a modified contingency table which copies the true positive count to the true negatives:

             Predicted ⊕   Predicted ⊖
Actual ⊕     TP            FN             Pos
Actual ⊖     FP            TP             Neg − (TN − TP)
             TP + FP       Pos            2TP + FP + FN

We can take a weighted harmonic mean, which is commonly parametrised as follows:

$$F_\beta \triangleq \frac{1}{\frac{1}{1+\beta^2}\cdot\frac{1}{\mathit{prec}} + \frac{\beta^2}{1+\beta^2}\cdot\frac{1}{\mathit{rec}}} = \frac{(1+\beta^2)\,TP}{(1+\beta^2)\,TP + FP + \beta^2 FN} \qquad (2)$$

There is a range of recent papers studying the F-score, several of which appeared in last year's NIPS conference [12, 9, 11]. Relevant results include the following: (i) non-decomposability of the Fβ score, meaning it is not an average over instances (it is a ratio of such averages, called a pseudo-linear function by [12]); (ii) estimators exist that are consistent, i.e., unbiased in the limit [9, 11]; (iii) given a model, operating points that are optimal for Fβ can be achieved by thresholding the model's scores [18]; (iv) a classifier yielding perfectly calibrated posterior probabilities has the property that the optimal threshold for F1 is half the optimal F1 at that point (first proved by [20] and later by [10], and generalised to Fβ by [9]). The latter results tell us that optimal thresholds for Fβ are lower than optimal thresholds for accuracy (or equal only in the case of the perfect model). They don't, however, tell us how to find such thresholds other than by tuning (though [12] propose a method inspired by cost-sensitive classification). The analysis in the next section significantly extends these results by demonstrating how we can identify all Fβ-optimal thresholds for any β in a single calibration procedure.

3 Precision-Recall-Gain Curves

In this section we demonstrate how Precision-Recall analysis can be adapted to inherit all the benefits of ROC analysis. While technically straightforward, the implications of our results are far-reaching.
For example, even something as seemingly innocuous as reporting the arithmetic average of F1 values over cross-validation folds is methodologically misguided: we will define the corresponding performance measure that can safely be averaged.

3.1 Baseline

A random classifier that predicts positive with probability p has Fβ score $(1+\beta^2)p\pi/(p+\beta^2\pi)$. This is monotonically increasing in p ∈ [0, 1] and hence reaches its maximum for p = 1, the always-positive classifier. Hence Precision-Recall analysis differs from classification accuracy in that the baseline to beat is the always-positive classifier rather than any random classifier. This baseline has prec = π and rec = 1, and it is easily seen that any model with prec < π or rec < π loses against this baseline. Hence it makes sense to consider only precision and recall values in the interval [π, 1].

Figure 2: (left) Conventional PR curve with hyperbolic F1 isometrics (dotted lines) and the baseline performance of the always-positive classifier (solid hyperbola). (right) Precision-Recall-Gain curve with minor diagonal as baseline, parallel F1 isometrics and a convex Pareto front.

Any real-valued variable x ∈ [min, max] can be rescaled by the mapping x ↦ (x − min)/(max − min). However, the linear scale is inappropriate here and we should use a harmonic scale instead, hence map to

$$\frac{1/x - 1/\mathit{min}}{1/\mathit{max} - 1/\mathit{min}} = \frac{\mathit{max}\cdot(x - \mathit{min})}{(\mathit{max} - \mathit{min})\cdot x} \qquad (3)$$

Taking max = 1 and min = π we arrive at the following definition.

Definition 1 (Precision Gain and Recall Gain).

$$\mathit{precG} = \frac{\mathit{prec} - \pi}{(1-\pi)\,\mathit{prec}} = 1 - \frac{\pi}{1-\pi}\cdot\frac{FP}{TP} \qquad \mathit{recG} = \frac{\mathit{rec} - \pi}{(1-\pi)\,\mathit{rec}} = 1 - \frac{\pi}{1-\pi}\cdot\frac{FN}{TP} \qquad (4)$$

A Precision-Recall-Gain curve plots Precision Gain on the y-axis against Recall Gain on the x-axis in the unit square (i.e., negative gains are ignored). An example PRG curve is given in Figure 2 (right).
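Definition 1 is straightforward to implement. The sketch below (illustrative code; the class counts are made up) computes precision and recall gain from a contingency table and checks that the always-positive classifier lands at the point (recG, precG) = (1, 0):

```python
def gains(tp, fp, fn, pi):
    """Precision gain and recall gain of Definition 1; pi is P(positive)."""
    prec = tp / (tp + fp)
    rec = tp / (tp + fn)
    prec_gain = (prec - pi) / ((1 - pi) * prec)
    rec_gain = (rec - pi) / ((1 - pi) * rec)
    return prec_gain, rec_gain

pos, neg = 40, 60
pi = pos / (pos + neg)

# Always-positive classifier: tp = pos, fp = neg, fn = 0.
pg, rg = gains(pos, neg, 0, pi)
assert abs(pg - 0.0) < 1e-12   # precision gain 0 (precision equals pi)
assert abs(rg - 1.0) < 1e-12   # recall gain 1 (perfect recall)
```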
The always-positive classifier has recG = 1 and precG = 0 and hence gets plotted in the lower right-hand corner of Precision-Recall-Gain space, regardless of the class distribution. Since we show in the next section that F1 isometrics have slope −1 in this space, it follows that all classifiers with baseline F1 performance end up on the minor diagonal in Precision-Recall-Gain space. In contrast, the corresponding F1 isometric in PR space is hyperbolic (Figure 2 (left)) and its exact location depends on the class distribution.

3.2 Linearity and optimality

One of the main benefits of PRG space is that it allows linear interpolation. This manifests itself in two ways: any point on a straight line between two endpoints is achievable by random choice between the endpoints (Theorem 1), and Fβ isometrics are straight lines with slope −β² (Theorem 2).

Theorem 1. Let P1 = (precG1, recG1) and P2 = (precG2, recG2) be points in Precision-Recall-Gain space representing the performance of Models 1 and 2 with contingency tables C1 and C2. Then a model with an interpolated contingency table C* = λC1 + (1−λ)C2 has precision gain precG* = µ precG1 + (1−µ) precG2 and recall gain recG* = µ recG1 + (1−µ) recG2, where µ = λTP1/(λTP1 + (1−λ)TP2).

Theorem 2. $\mathit{precG} + \beta^2\,\mathit{recG} = (1+\beta^2)\,FG_\beta$, with

$$FG_\beta = \frac{F_\beta - \pi}{(1-\pi)\,F_\beta} = 1 - \frac{\pi}{1-\pi}\cdot\frac{FP + \beta^2 FN}{(1+\beta^2)\,TP}.$$

FGβ is a linearised version of Fβ in the same way as precG and recG are linearised versions of precision and recall. FGβ measures the gain in performance (on a linear scale) relative to a classifier with both precision and recall – and hence Fβ – equal to π. F1 isometrics are indicated in Figure 2 (right). By increasing (decreasing) β², these lines of constant Fβ become steeper (flatter) and hence we are putting more emphasis on recall (precision). With regard to optimality, we already knew that every classifier or threshold optimal for Fβ for some β² is optimal for acc_c for some c.
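The identity in Theorem 2 can be verified numerically. In the sketch below (made-up counts; π is the proportion of positives) both sides are computed from the count-based forms of precG, recG and FGβ:

```python
# Hypothetical contingency table: 50 positives, 50 negatives.
tp, fp, fn, tn = 30.0, 10.0, 20.0, 40.0
pi = (tp + fn) / (tp + fn + fp + tn)
beta2 = 2.0   # arbitrary trade-off beta^2

prec_gain = 1 - pi / (1 - pi) * fp / tp
rec_gain = 1 - pi / (1 - pi) * fn / tp
fg_beta = 1 - pi / (1 - pi) * (fp + beta2 * fn) / ((1 + beta2) * tp)

# Theorem 2: precG + beta^2 * recG = (1 + beta^2) * FG_beta
assert abs(prec_gain + beta2 * rec_gain - (1 + beta2) * fg_beta) < 1e-12
```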
The reverse also holds, except for the ROC convex hull points below the baseline (e.g., the always-negative classifier). Due to linearity the PRG Pareto front is convex and easily constructed by visual inspection. We will see in Section 3.4 that these segments of the PRG convex hull can be used to obtain classifier scores specifically calibrated for F-scores, thereby pre-empting the need for any more threshold tuning.

3.3 Area

Define the area under the Precision-Recall-Gain curve as $\mathrm{AUPRG} = \int_0^1 \mathit{precG} \, d\mathit{recG}$. We will show how this area can be related to an expected FG1 score when averaging over the operating points on the curve in a particular way. To this end we define $\Delta = \mathit{recG}/\pi - \mathit{precG}/(1-\pi)$, which expresses the extent to which recall exceeds precision (reweighting by π and 1−π guarantees that Δ is monotonically increasing when changing the threshold towards having more positive predictions, as shown in the proof of Theorem 3 in the Supplementary Material). Hence, $-y_0/(1-\pi) \le \Delta \le 1/\pi$, where $y_0$ denotes the precision gain at the operating point where recall gain is zero. The following theorem shows that if the operating points are chosen such that Δ is uniformly distributed in this range, then the expected FG1 can be calculated from the area under the Precision-Recall-Gain curve (the Supplementary Material proves a more general result for expected FGβ). This justifies the use of AUPRG as a performance metric without fixing the classifier's operating point in advance.

Theorem 3. Let the operating points of a model with area under the Precision-Recall-Gain curve AUPRG be chosen such that Δ is uniformly distributed within $[-y_0/(1-\pi), 1/\pi]$. Then the expected FG1 score is equal to

$$\mathbb{E}[FG_1] = \frac{\mathrm{AUPRG}/2 + 1/4 - \pi(1 - y_0^2)/4}{1 - \pi(1 - y_0)} \qquad (5)$$

The expected reciprocal F1 score can be calculated from the relationship $\mathbb{E}[1/F_1] = (1 - (1-\pi)\,\mathbb{E}[FG_1])/\pi$, which follows from the definition of FGβ. In the special case where $y_0 = 1$ the expected FG1 score is $\mathrm{AUPRG}/2 + 1/4$.
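Because interpolation is linear in PRG space, the area under a PRG curve given by its vertices can be computed exactly with the trapezoidal rule. A minimal sketch (the example curve is made up; points with negative recall gain would be excluded first):

```python
import numpy as np

def auprg(rec_gains, prec_gains):
    """Exact area under a piecewise-linear PRG curve given its vertices."""
    order = np.argsort(rec_gains)
    x = np.asarray(rec_gains, dtype=float)[order]
    y = np.asarray(prec_gains, dtype=float)[order]
    # trapezoidal rule: sum of segment areas
    return float(np.sum((y[1:] + y[:-1]) / 2 * np.diff(x)))

# Vertices of a hypothetical PRG curve from (0, 1) down to (1, 0).
x = [0.0, 0.5, 1.0]
y = [1.0, 0.9, 0.0]
area = auprg(x, y)
assert abs(area - 0.7) < 1e-12   # 0.475 + 0.225
```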
3.4 Calibration

Figure 3 (left) shows an ROC curve with empirically calibrated posterior probabilities obtained by isotonic regression [19] or the ROC convex hull [4]. Segments of the convex hull are labelled with the value of c for which the two endpoints have the same skew-sensitive accuracy acc_c. Conversely, if a point connects two segments with c1 < c2, then that point is optimal for any c such that c1 < c < c2. The calibrated values c are derived from the ROC slope r by $c = \pi r/(\pi r + (1-\pi))$ [6]. For example, the point on the convex hull two steps up from the origin optimises skew-sensitive accuracy acc_c for 0.29 < c < 0.75 and hence also standard accuracy (c = 1/2). We are now in a position to calculate similarly calibrated scores for the F-score.

Theorem 4. Let two classifiers be such that prec1 > prec2 and rec1 < rec2. Then these two classifiers have the same Fβ score if and only if

$$\beta^2 = -\frac{1/\mathit{prec}_1 - 1/\mathit{prec}_2}{1/\mathit{rec}_1 - 1/\mathit{rec}_2} \qquad (6)$$

In line with ROC calibration we convert these slopes into a calibrated score between 0 and 1:

$$d = \frac{1}{\beta^2 + 1} = \frac{1/\mathit{rec}_1 - 1/\mathit{rec}_2}{(1/\mathit{rec}_1 - 1/\mathit{rec}_2) - (1/\mathit{prec}_1 - 1/\mathit{prec}_2)} \qquad (7)$$

It is important to note that there is no model-independent relationship between ROC-calibrated scores and PRG-calibrated scores, so we cannot derive d from c. However, we can equip a model with two calibration maps, one for accuracy and the other for F-score.

Figure 3: (left) ROC curve with scores empirically calibrated for accuracy. The green dots correspond to a regular grid in Precision-Recall-Gain space. (right) Precision-Recall-Gain curve with scores calibrated for Fβ. The green dots correspond to a regular grid in ROC space, clearly indicating that ROC analysis over-emphasises the high-recall region.
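Equations (6) and (7) translate directly into code. The sketch below (hypothetical operating points) computes the trade-off β² at which two points have equal Fβ, the calibrated score d, and checks the equal-Fβ property:

```python
def f_calibrated_score(prec1, rec1, prec2, rec2):
    """beta^2 of Eq. (6) and calibrated score d of Eq. (7)."""
    dp = 1 / prec1 - 1 / prec2
    dr = 1 / rec1 - 1 / rec2
    beta2 = -dp / dr          # Eq. (6)
    d = dr / (dr - dp)        # Eq. (7), equal to 1 / (beta2 + 1)
    return beta2, d

def f_beta(prec, rec, beta2):
    """F_beta as the weighted harmonic mean of precision and recall."""
    return (1 + beta2) / (1 / prec + beta2 / rec)

# Point 1 has higher precision, point 2 higher recall (made-up values).
beta2, d = f_calibrated_score(0.8, 0.5, 0.6, 0.7)

assert abs(d - 1 / (beta2 + 1)) < 1e-9
assert 0 < d < 1
# At this beta^2 the two operating points have the same F_beta score.
assert abs(f_beta(0.8, 0.5, beta2) - f_beta(0.6, 0.7, beta2)) < 1e-9
```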
Figure 3 (right) shows the PRG curve for the running example with scores calibrated for Fβ. Score 0.76 corresponds to β² = (1 − 0.76)/0.76 = 0.32 and score 0.49 corresponds to β² = 1.04, so the point closest to the Precision-Recall breakeven line optimises Fβ for 0.32 < β² < 1.04 and hence also F1 (but note that the next point to the right on the convex hull is nearly as good for F1, on account of the connecting line segment having a calibrated score close to 1/2).

4 Practical examples

The key message of this paper is that precision, recall and F-score are expressed on a harmonic scale and hence any kind of arithmetic average of these quantities is methodologically wrong. We now demonstrate that this matters in practice. In particular, we show that in some sense AUPR and AUPRG are as different from each other as AUPR and AUROC. Using the OpenML platform [17] we took all those binary classification tasks which have 10-fold cross-validated predictions using at least 30 models from different learning methods (these are called flows in OpenML). In each of the obtained 886 tasks (covering 426 different datasets) we applied the following procedure. First, we fetched the predicted scores of 30 randomly selected models from different flows and calculated the areas under the ROC, PRG and PR curves (with hyperbolic interpolation as recommended by [2]), with the minority class as positives. We then ranked the 30 models with respect to these measures. Figure 4 plots AUPRG-rank against AUPR-rank across all 25980 models. Figure 4 (left) demonstrates that AUPR and AUPRG often disagree in ranking the models. In particular, they disagree on the best method in 24% of the tasks and on the top three methods in 58% of the tasks (i.e., they agree on the top, second and third method in 42% of the tasks). This amount of disagreement is comparable to the disagreement between AUPR and AUROC (29% and 65% disagreement for top 1 and top 3, respectively) and between AUPRG and AUROC (22% and 57%).
Therefore, AUPR, AUPRG and AUROC are related quantities, but still all significantly different. The same conclusion is supported by the pairwise correlations between the ranks across all tasks: the correlation between AUPR-ranks and AUPRG-ranks is 0.95, between AUPR and AUROC it is 0.95, and between AUPRG and AUROC it is 0.96. Figure 4 (right) shows AUPRG vs AUPR in two datasets with relatively low and high rank correlations (0.944 and 0.991, selected as lower and upper quartiles among all tasks). In both datasets AUPR and AUPRG agree on the best model. However, in the white-clover dataset the second best is AdaBoost according to AUPRG and Logistic Regression according to AUPR. As seen in Figure 5, this disagreement is caused by AUPR taking into account the poor performance of AdaBoost in the early part of the ranking; AUPRG ignores this part as it has negative recall gain.

Figure 4: (left) Comparison of AUPRG-ranks vs AUPR-ranks. Each cell shows how many models across 886 OpenML tasks have these ranks among the 30 models in the same task. (right) Comparison of AUPRG vs AUPR in OpenML tasks with IDs 3872 (white-clover) and 3896 (ada-agnostic), with 30 models in each task. Some models perform worse than random (AUPRG < 0) and are not plotted. The models represented by the two encircled triangles are shown in detail in Figure 5.
Figure 5: (left) ROC curves for AdaBoost (solid line) and Logistic Regression (dashed line) on the white-clover dataset (OpenML run IDs 145651 and 267741, respectively). (middle) Corresponding PR curves. The solid curve is on average lower, with AUPR = 0.724, whereas the dashed curve has AUPR = 0.773. (right) Corresponding PRG curves, where the situation has reversed: the solid curve has AUPRG = 0.714 while the dashed curve has a lower AUPRG of 0.687.

5 Concluding remarks

If a practitioner using PR-analysis and the F-score should take one methodological recommendation from this paper, it is to use the F-Gain score instead, to make sure baselines are taken into account properly and averaging is done on the appropriate scale. If required the FGβ score can be converted back to an Fβ score at the end. The second recommendation is to use Precision-Recall-Gain curves instead of PR curves, and the third to use AUPRG, which is easier to calculate than AUPR due to linear interpolation, has a proper interpretation as an expected F-Gain score, and allows performance assessment over a range of operating points. To assist practitioners we have made R, Matlab and Java code to calculate AUPRG and PRG curves available at http://www.cs.bris.ac.uk/~flach/PRGcurves/. We are also working on closer integration of AUPRG as an evaluation metric in OpenML and performance visualisation platforms such as ViperCharts [15]. As future work we mention the interpretation of AUPRG as a measure of ranking performance: we are working on an interpretation which gives non-uniform weights to the positives and as such is related to Discounted Cumulative Gain. A second line of research involves the use of cost curves for the FGβ score and associated threshold choice methods.
Acknowledgments

This work was supported by the REFRAME project granted by the European Coordinated Research on Long-Term Challenges in Information and Communication Sciences & Technologies ERA-Net (CHIST-ERA), and funded by the Engineering and Physical Sciences Research Council in the UK under grant EP/K018728/1. Discussions with Hendrik Blockeel helped to clarify the intuitions underlying this work.

References

[1] K. Boyd, V. S. Costa, J. Davis, and C. D. Page. Unachievable region in precision-recall space and its effect on empirical evaluation. In International Conference on Machine Learning, page 349, 2012.
[2] J. Davis and M. Goadrich. The relationship between precision-recall and ROC curves. In Proceedings of the 23rd International Conference on Machine Learning, pages 233–240, 2006.
[3] T. Fawcett. An introduction to ROC analysis. Pattern Recognition Letters, 27(8):861–874, 2006.
[4] T. Fawcett and A. Niculescu-Mizil. PAV and the ROC convex hull. Machine Learning, 68(1):97–106, July 2007.
[5] P. A. Flach. The geometry of ROC space: understanding machine learning metrics through ROC isometrics. In Machine Learning, Proceedings of the Twentieth International Conference (ICML 2003), pages 194–201, 2003.
[6] P. A. Flach. ROC analysis. In C. Sammut and G. Webb, editors, Encyclopedia of Machine Learning, pages 869–875. Springer US, 2010.
[7] D. J. Hand and R. J. Till. A simple generalisation of the area under the ROC curve for multiple class classification problems. Machine Learning, 45(2):171–186, 2001.
[8] J. Hernández-Orallo, P. Flach, and C. Ferri. A unified view of performance metrics: Translating threshold choice into expected classification loss. Journal of Machine Learning Research, 13:2813–2869, 2012.
[9] O. O. Koyejo, N. Natarajan, P. K. Ravikumar, and I. S. Dhillon. Consistent binary classification with generalized performance metrics. In Advances in Neural Information Processing Systems, pages 2744–2752, 2014.
[10] Z. C. Lipton, C. Elkan, and B. Naryanaswamy. Optimal thresholding of classifiers to maximize F1 measure. In Machine Learning and Knowledge Discovery in Databases, volume 8725 of Lecture Notes in Computer Science, pages 225–239. Springer Berlin Heidelberg, 2014.
[11] H. Narasimhan, R. Vaish, and S. Agarwal. On the statistical consistency of plug-in classifiers for non-decomposable performance measures. In Advances in Neural Information Processing Systems 27, pages 1493–1501, 2014.
[12] S. P. Parambath, N. Usunier, and Y. Grandvalet. Optimizing F-measures by cost-sensitive classification. In Advances in Neural Information Processing Systems, pages 2123–2131, 2014.
[13] J. C. Platt. Probabilistic outputs for support vector machines and comparisons to regularized likelihood methods. In Advances in Large Margin Classifiers, pages 61–74. MIT Press, Boston, 1999.
[14] F. Provost and T. Fawcett. Robust classification for imprecise environments. Machine Learning, 42(3):203–231, 2001.
[15] B. Sluban and N. Lavrač. ViperCharts: Visual performance evaluation platform. In H. Blockeel, K. Kersting, S. Nijssen, and F. Železný, editors, Machine Learning and Knowledge Discovery in Databases, volume 8190 of Lecture Notes in Computer Science, pages 650–653. Springer Berlin Heidelberg, 2013.
[16] C. J. Van Rijsbergen. Information Retrieval. Butterworth-Heinemann, Newton, MA, USA, 2nd edition, 1979.
[17] J. Vanschoren, J. N. van Rijn, B. Bischl, and L. Torgo. OpenML: networked science in machine learning. SIGKDD Explorations, 15(2):49–60, 2013.
[18] N. Ye, K. M. A. Chai, W. S. Lee, and H. L. Chieu. Optimizing F-measures: A tale of two approaches. In Proceedings of the 29th International Conference on Machine Learning, pages 289–296, 2012.
[19] B. Zadrozny and C. Elkan. Obtaining calibrated probability estimates from decision trees and naive Bayesian classifiers. In Proceedings of the Eighteenth International Conference on Machine Learning (ICML 2001), pages 609–616, 2001.
[20] M.-J. Zhao, N. Edakunni, A. Pocock, and G. Brown. Beyond Fano's inequality: bounds on the optimal F-score, BER, and cost-sensitive risk and their implications. The Journal of Machine Learning Research, 14(1):1033–1090, 2013.
Less is More: Nyström Computational Regularization

Alessandro Rudi† Raffaello Camoriano†‡ Lorenzo Rosasco†◦
†Università degli Studi di Genova - DIBRIS, Via Dodecaneso 35, Genova, Italy
‡Istituto Italiano di Tecnologia - iCub Facility, Via Morego 30, Genova, Italy
◦Massachusetts Institute of Technology and Istituto Italiano di Tecnologia, Laboratory for Computational and Statistical Learning, Cambridge, MA 02139, USA
{ale rudi, lrosasco}@mit.edu, raffaello.camoriano@iit.it

Abstract

We study Nyström type subsampling approaches to large scale kernel methods, and prove learning bounds in the statistical learning setting, where random sampling and high probability estimates are considered. In particular, we prove that these approaches can achieve optimal learning bounds, provided the subsampling level is suitably chosen. These results suggest a simple incremental variant of Nyström Kernel Regularized Least Squares, where the subsampling level implements a form of computational regularization, in the sense that it controls at the same time regularization and computations. Extensive experimental analysis shows that the considered approach achieves state of the art performances on benchmark large scale datasets.

1 Introduction

Kernel methods provide an elegant and effective framework to develop nonparametric statistical approaches to learning [1]. However, memory requirements make these methods unfeasible when dealing with large datasets. Indeed, this observation has motivated a variety of computational strategies to develop large scale kernel methods [2–8]. In this paper we study subsampling methods, which we broadly refer to as Nyström approaches. These methods replace the empirical kernel matrix, needed by standard kernel methods, with a smaller matrix obtained by (column) subsampling [2, 3]. Such procedures are shown to often dramatically reduce memory/time requirements while preserving good practical performances [9–12]. The goal of our study is two-fold.
First, and foremost, we aim at providing a theoretical characterization of the generalization properties of such learning schemes in a statistical learning setting. Second, we wish to understand the role played by the subsampling level both from a statistical and a computational point of view. As discussed in the following, this latter question leads to a natural variant of Kernel Regularized Least Squares (KRLS), where the subsampling level controls both regularization and computations. From a theoretical perspective, the effect of Nyström approaches has been primarily characterized considering the discrepancy between a given empirical kernel matrix and its subsampled version [13–19]. While interesting in their own right, these latter results do not directly yield information on the generalization properties of the obtained algorithm. Results in this direction, albeit suboptimal, were first derived in [20] (see also [21, 22]), and more recently in [23, 24]. In these latter papers, sharp error analyses in expectation are derived in a fixed design regression setting for a form of Kernel Regularized Least Squares. In particular, in [23] a basic uniform sampling approach is studied, while in [24] a subsampling scheme based on the notion of leverage score is considered. The main technical contribution of our study is an extension of these latter results to the statistical learning setting, where the design is random and high probability estimates are considered. The more general setting makes the analysis considerably more complex. Our main result gives optimal finite sample bounds for both uniform and leverage score based subsampling strategies. These methods are shown to achieve the same (optimal) learning error as kernel regularized least squares, recovered as a special case, while allowing substantial computational gains.
Our analysis highlights the interplay between the regularization and subsampling parameters, suggesting that the latter can be used to control simultaneously regularization and computations. This strategy implements a form of computational regularization in the sense that the computational resources are tailored to the generalization properties in the data. This idea is developed considering an incremental strategy to efficiently compute learning solutions for different subsampling levels. The procedure thus obtained, which is a simple variant of classical Nyström Kernel Regularized Least Squares with uniform sampling, allows for efficient model selection and achieves state of the art results on a variety of benchmark large scale datasets. The rest of the paper is organized as follows. In Section 2, we introduce the setting and algorithms we consider. In Section 3, we present our main theoretical contributions. In Section 4, we discuss computational aspects and experimental results.

2 Supervised learning with KRLS and Nyström approaches

Let X × ℝ be a probability space with distribution ρ, where we view X and ℝ as the input and output spaces, respectively. Let ρX denote the marginal distribution of ρ on X and ρ(·|x) the conditional distribution on ℝ given x ∈ X. Given a hypothesis space H of measurable functions from X to ℝ, the goal is to minimize the expected risk,

$$\min_{f \in H} \mathcal{E}(f), \qquad \mathcal{E}(f) = \int_{X \times \mathbb{R}} (f(x) - y)^2 \, d\rho(x, y), \qquad (1)$$

provided ρ is known only through a training set of pairs $(x_i, y_i)_{i=1}^n$ sampled identically and independently according to ρ. A basic example of the above setting is random design regression with the squared loss, in which case

$$y_i = f_*(x_i) + \epsilon_i, \qquad i = 1, \dots, n, \qquad (2)$$

with $f_*$ a fixed regression function, $\epsilon_1, \dots, \epsilon_n$ a sequence of random variables seen as noise, and $x_1, \dots, x_n$ random inputs. In the following, we consider kernel methods, based on choosing a hypothesis space which is a separable reproducing kernel Hilbert space.
The latter is a Hilbert space H of functions, with inner product $\langle \cdot, \cdot \rangle_H$, such that there exists a function K : X × X → ℝ with the following two properties: 1) for all x ∈ X, $K_x(\cdot) = K(x, \cdot)$ belongs to H, and 2) the so called reproducing property holds: $f(x) = \langle f, K_x \rangle_H$, for all f ∈ H, x ∈ X [25]. The function K, called reproducing kernel, is easily shown to be symmetric and positive definite, that is, the kernel matrix $(K_N)_{i,j} = K(x_i, x_j)$ is positive semidefinite for all $x_1, \dots, x_N \in X$, N ∈ ℕ. A classical way to derive an empirical solution to problem (1) is to consider a Tikhonov regularization approach, based on the minimization of the penalized empirical functional,

$$\min_{f \in H} \frac{1}{n} \sum_{i=1}^n (f(x_i) - y_i)^2 + \lambda \|f\|_H^2, \qquad \lambda > 0. \qquad (3)$$

The above approach is referred to as Kernel Regularized Least Squares (KRLS) or Kernel Ridge Regression (KRR). It is easy to see that a solution $\hat f_\lambda$ to problem (3) exists, it is unique, and the representer theorem [1] shows that it can be written as

$$\hat f_\lambda(x) = \sum_{i=1}^n \hat\alpha_i K(x_i, x) \quad \text{with} \quad \hat\alpha = (K_n + \lambda n I)^{-1} y, \qquad (4)$$

where $x_1, \dots, x_n$ are the training set points, $y = (y_1, \dots, y_n)$ and $K_n$ is the empirical kernel matrix. Note that this result implies that we can restrict the minimization in (3) to the space

$$H_n = \Big\{ f \in H \;\Big|\; f = \sum_{i=1}^n \alpha_i K(x_i, \cdot), \; \alpha_1, \dots, \alpha_n \in \mathbb{R} \Big\}.$$

Storing the kernel matrix $K_n$, and solving the linear system in (4), can become computationally unfeasible as n increases. In the following, we consider strategies to find more efficient solutions, based on the idea of replacing $H_n$ with

$$H_m = \Big\{ f \;\Big|\; f = \sum_{i=1}^m \alpha_i K(\tilde x_i, \cdot), \; \alpha \in \mathbb{R}^m \Big\},$$

where m ≤ n and $\{\tilde x_1, \dots, \tilde x_m\}$ is a subset of the input points in the training set. The solution $\hat f_{\lambda,m}$ of the corresponding minimization problem can now be written as

$$\hat f_{\lambda,m}(x) = \sum_{i=1}^m \tilde\alpha_i K(\tilde x_i, x) \quad \text{with} \quad \tilde\alpha = (K_{nm}^\top K_{nm} + \lambda n K_{mm})^\dagger K_{nm}^\top y, \qquad (5)$$

where $A^\dagger$ denotes the Moore-Penrose pseudoinverse of a matrix A, and $(K_{nm})_{ij} = K(x_i, \tilde x_j)$, $(K_{mm})_{kj} = K(\tilde x_k, \tilde x_j)$ with i ∈ {1, ..., n} and j, k ∈ {1, ..., m} [2].
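Equations (4) and (5) can be compared directly on a small synthetic problem. The sketch below (illustrative code, not the authors' implementation; the Gaussian kernel, bandwidth and data are made-up choices) computes the exact KRLS solution and its plain Nyström approximation with m uniformly subsampled landmarks:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, lam = 200, 50, 1e-2

# Toy regression data: noisy sine on [-3, 3].
X = rng.uniform(-3, 3, size=(n, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(n)

def gauss_kernel(A, B, sigma=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma**2))

# Exact KRLS, Eq. (4): alpha = (K_n + lam*n*I)^{-1} y
Kn = gauss_kernel(X, X)
alpha = np.linalg.solve(Kn + lam * n * np.eye(n), y)

# Plain Nystrom, Eq. (5): landmarks sampled uniformly without replacement
idx = rng.choice(n, size=m, replace=False)
Knm = Kn[:, idx]                       # K(x_i, x~_j)
Kmm = Kn[np.ix_(idx, idx)]             # K(x~_k, x~_j)
alpha_tilde = np.linalg.pinv(Knm.T @ Knm + lam * n * Kmm) @ Knm.T @ y

f_exact = Kn @ alpha                   # exact predictions on the training set
f_nystrom = Knm @ alpha_tilde          # Nystrom predictions

# With m = 50 landmarks the two solutions agree closely on this problem.
assert np.max(np.abs(f_exact - f_nystrom)) < 0.1
```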
The above approach is related to Nyström methods, and different approximation strategies correspond to different ways to select the input subset. While our framework applies to a broader class of strategies, see Section C.1, in the following we primarily consider two techniques.

Plain Nyström. The points {x̃_1, . . . , x̃_m} are sampled uniformly at random without replacement from the training set.

Approximate leverage scores (ALS) Nyström. Recall that the leverage scores associated to the training set points x_1, . . . , x_n are (l_i(t))_{i=1}^n,

  l_i(t) = (K_n (K_n + tnI)^{−1})_{ii},   i ∈ {1, . . . , n},   (6)

for any t > 0, where (K_n)_{ij} = K(x_i, x_j). In practice, leverage scores are onerous to compute and approximations (l̂_i(t))_{i=1}^n can be considered [16, 17, 24]. In particular, in the following we are interested in suitable approximations defined as follows:

Definition 1 (T-approximate leverage scores). Let (l_i(t))_{i=1}^n be the leverage scores associated to the training set for a given t. Let δ > 0, t_0 > 0 and T ≥ 1. We say that (l̂_i(t))_{i=1}^n are T-approximate leverage scores with confidence δ when, with probability at least 1 − δ,

  (1/T) l_i(t) ≤ l̂_i(t) ≤ T l_i(t)   ∀i ∈ {1, . . . , n}, t ≥ t_0.

Given T-approximate leverage scores for t > λ_0, {x̃_1, . . . , x̃_m} are sampled from the training set independently with replacement, with the probability of selecting point i given by P_t(i) = l̂_i(t) / Σ_j l̂_j(t). In the next section, we state and discuss our main result, showing that the KRLS formulation based on plain or approximate-leverage-score Nyström subsampling provides optimal empirical solutions to problem (1).

3 Theoretical analysis

In this section, we state and discuss our main results. We need several assumptions. The first basic assumption is that problem (1) admits at least a solution.

Assumption 1. There exists an f_H ∈ H such that E(f_H) = min_{f∈H} E(f).

Note that, while the minimizer might not be unique, our results apply to the case in which f_H is the unique minimizer with minimal norm.
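Returning briefly to the ALS sampling scheme of Section 2: for small n, the exact leverage scores of Eq. (6) and the resulting sampling distribution can be computed directly, at the O(n³) cost that the approximations of [16, 17, 24] are designed to avoid. The sketch below is our own illustration, with invented function names.

```python
import numpy as np

def leverage_scores(Kn, t):
    # Eq. (6): l_i(t) = (K_n (K_n + t*n*I)^{-1})_{ii}
    n = Kn.shape[0]
    return np.diag(Kn @ np.linalg.inv(Kn + t * n * np.eye(n))).copy()

def als_sampling_probs(Kn, t):
    # P_t(i) = l_i(t) / sum_j l_j(t), here with exact scores
    l = leverage_scores(Kn, t)
    return l / l.sum()

def als_sample(Kn, t, m, rng):
    # draw m landmark indices independently with replacement
    return rng.choice(Kn.shape[0], size=m, replace=True, p=als_sampling_probs(Kn, t))
```

Note that Σ_i l_i(t) = Tr(K_n(K_n + tnI)^{−1}), an empirical counterpart of the effective dimension N(t) introduced in Definition 2 below, so the sampling concentrates on points that matter for the spectrum.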
Also, note that the above condition is weaker than assuming the regression function in (2) to belong to H. Finally, we note that the study of the paper can be adapted to the case in which minimizers do not exist, but the analysis is considerably more involved and left to a longer version of the paper. The second assumption is a basic condition on the probability distribution.

Assumption 2. Let z_x be the random variable z_x = y − f_H(x), with x ∈ X and y distributed according to ρ(y|x). Then, there exist M, σ > 0 such that E|z_x|^p ≤ (1/2) p! M^{p−2} σ² for any p ≥ 2, almost everywhere on X.

The above assumption is needed to control random quantities and is related to a noise assumption in the regression model (2). It is clearly weaker than the often considered bounded-output assumption [25], and trivially verified in classification. The last two assumptions describe the capacity (roughly speaking, the “size”) of the hypothesis space induced by K with respect to ρ, and the regularity of f_H with respect to K and ρ. To discuss them, we first need the following definition.

Definition 2 (Covariance operator and effective dimensions). We define the covariance operator as

  C : H → H,   ⟨f, Cg⟩_H = ∫_X f(x) g(x) dρ_X(x),   ∀f, g ∈ H.

Moreover, for λ > 0, we define the random variable N_x(λ) = ⟨K_x, (C + λI)^{−1} K_x⟩_H with x ∈ X distributed according to ρ_X, and let

  N(λ) = E N_x(λ),   N_∞(λ) = sup_{x∈X} N_x(λ).

We add several comments. Note that C corresponds to the second moment operator, but we refer to it as the covariance operator with an abuse of terminology. Moreover, note that N(λ) = Tr(C(C + λI)^{−1}) (see [26]). This latter quantity, called effective dimension or degrees of freedom, can be seen as a measure of the capacity of the hypothesis space. The quantity N_∞(λ) can be seen to provide a uniform bound on the leverage scores in Eq. (6). Clearly, N(λ) ≤ N_∞(λ) for all λ > 0.

Assumption 3. The kernel K is measurable and C is bounded. Moreover, for all λ > 0 and some Q > 0,

  N_∞(λ) < ∞,   (7)
  N(λ) ≤ Q λ^{−γ},   0 < γ ≤ 1.   (8)

Measurability of K and boundedness of C are minimal conditions to ensure that the covariance operator is a well-defined linear, continuous, self-adjoint, positive operator [25]. Condition (7) is satisfied if the kernel is bounded, sup_{x∈X} K(x, x) = κ² < ∞; indeed in this case N_∞(λ) ≤ κ²/λ for all λ > 0. Conversely, it can be seen that condition (7) together with boundedness of C imply that the kernel is bounded; indeed¹ κ² ≤ 2∥C∥ N_∞(∥C∥). Boundedness of the kernel implies in particular that the operator C is trace class and allows to use tools from spectral theory. Condition (8) quantifies the capacity assumption and is related to covering/entropy number conditions (see [25] for further details). In particular, it is known that condition (8) is ensured if the eigenvalues (σ_i)_i of C satisfy a polynomial decay condition σ_i ∼ i^{−1/γ}. Note that, since the operator C is trace class, condition (8) always holds for γ = 1. Here, for space constraints and in the interest of clarity, we restrict to such a polynomial condition, but the analysis directly applies to other conditions including exponential decay or finite rank conditions [26]. Finally, we have the following regularity assumption.

Assumption 4. There exist s ≥ 0, 1 ≤ R < ∞, such that ∥C^{−s} f_H∥_H < R.

The above condition is fairly standard, and can be equivalently formulated in terms of classical concepts in approximation theory such as interpolation spaces [25]. Intuitively, it quantifies the degree to which f_H can be well approximated by functions in the RKHS H and allows to control the bias/approximation error of a learning solution. For s = 0, it is always satisfied. For larger s, we are assuming f_H to belong to subspaces of H that are the images of the fractional compact operators C^s. Such spaces contain functions which, expanded on a basis of eigenfunctions of C, have larger coefficients in correspondence to large eigenvalues.
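The effective dimension of Definition 2 can be made concrete with an entirely illustrative (not-from-the-paper) plug-in estimate: replacing C by its empirical counterpart, whose eigenvalues are those of K_n/n, gives N̂(λ) = Σ_i σ̂_i/(σ̂_i + λ). It is decreasing in λ and, for a bounded kernel with sup_x K(x, x) = κ², satisfies N̂(λ) ≤ κ²/λ, mirroring the bound N_∞(λ) ≤ κ²/λ above.

```python
import numpy as np

def empirical_effective_dimension(Kn, lam):
    # N_hat(lam) = sum_i s_i / (s_i + lam), with (s_i) the spectrum of K_n / n
    s = np.clip(np.linalg.eigvalsh(Kn) / Kn.shape[0], 0.0, None)
    return float((s / (s + lam)).sum())
```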
Such an assumption is natural in view of using techniques such as (4), which can be seen as a form of spectral filtering, that estimate stable solutions by discarding the contribution of small eigenvalues [27]. In the next section, we are going to quantify the quality of empirical solutions of problem (1) obtained by schemes of the form (5), in terms of the quantities in Assumptions 2, 3, 4.

¹If N_∞(λ) is finite, then N_∞(∥C∥) = sup_{x∈X} ∥(C + ∥C∥I)^{−1/2} K_x∥² ≥ (1/2)∥C∥^{−1} sup_{x∈X} ∥K_x∥², therefore K(x, x) ≤ 2∥C∥ N_∞(∥C∥).

3.1 Main results

In this section, we state and discuss our main results, starting with optimal finite sample error bounds for regularized least squares based on plain and approximate-leverage-score Nyström subsampling.

Theorem 1. Under Assumptions 1, 2, 3, and 4, let δ > 0, v = min(s, 1/2), p = 1 + 1/(2v + γ) and assume

  n ≥ 1655κ² + 223κ² log(6κ²/δ) + ((38p/∥C∥) log(114κ²p/(∥C∥δ)))^p.

Then, the following inequality holds with probability at least 1 − δ,

  E(f̂_{λ,m}) − E(f_H) ≤ q² n^{−(2v+1)/(2v+γ+1)},   with   q = 6R (2∥C∥ + Mκ/√∥C∥ + √(Qσ²/∥C∥^γ)) log(6/δ),   (9)

with f̂_{λ,m} as in (5), λ = ∥C∥ n^{−1/(2v+γ+1)} and

1. for plain Nyström, m ≥ (67 ∨ 5N_∞(λ)) log(12κ²/(λδ));
2. for ALS Nyström and T-approximate leverage scores with subsampling probabilities P_λ, t_0 ≥ (19κ²/n) log(12n/δ) and m ≥ (334 ∨ 78T² N(λ)) log(48n/δ).

We add several comments. First, the above results can be shown to be optimal in a minimax sense. Indeed, minimax lower bounds proved in [26, 28] show that the learning rate in (9) is optimal under the considered assumptions (see Thm. 2, 3 of [26]; for a discussion on minimax lower bounds see Sec. 2 of [26]). Second, the obtained bounds can be compared to those obtained for other regularized learning techniques. Techniques known to achieve optimal error rates include Tikhonov regularization [26, 28, 29], iterative regularization by early stopping [30, 31], spectral cut-off regularization (a.k.a.
principal component regression or truncated SVD) [30, 31], as well as regularized stochastic gradient methods [32]. All these techniques are essentially equivalent from a statistical point of view and differ only in the required computations. For example, iterative methods allow for a computation of solutions corresponding to different regularization levels which is more efficient than Tikhonov or SVD based approaches. The key observation is that all these methods have the same O(n²) memory requirement. In this view, our results show that randomized subsampling methods can break such a memory barrier, and consequently achieve much better time complexity, while preserving optimal learning guarantees. Finally, we can compare our results with previous analyses of randomized kernel methods. As already mentioned, results close to those in Theorem 1 are given in [23, 24] in a fixed design setting. Our results extend and generalize the conclusions of these papers to a general statistical learning setting. Relevant results are given in [8] for a different approach, based on averaging KRLS solutions obtained by splitting the data into m groups (divide and conquer RLS). The analysis in [8] is only in expectation, but considers random design and shows that the proposed method is indeed optimal provided the number of splits is chosen depending on the effective dimension N(λ). This is the only other work we are aware of establishing optimal learning rates for randomized kernel approaches in a statistical learning setting. Compared with Nyström computational regularization, the main disadvantage of the divide and conquer approach lies in the model selection phase, where solutions corresponding to different regularization parameters and numbers of splits usually need to be computed. The proof of Theorem 1 is fairly technical and lengthy. It incorporates ideas from [26] and techniques developed to study spectral filtering regularization [30, 33].
In the next section, we briefly sketch some main ideas and discuss how they suggest an interesting perspective on regularization techniques including subsampling.

3.2 Proof sketch and a computational regularization perspective

A key step in the proof of Theorem 1 is an error decomposition, and corresponding bound, for any fixed λ and m. Indeed, it is proved in Theorem 2 and Proposition 2 that, for δ > 0, with probability at least 1 − δ,

  (E(f̂_{λ,m}) − E(f_H))^{1/2} ≲ R (M√(N_∞(λ))/n + √(σ²N(λ)/n)) log(6/δ) + R C(m)^{1/2+v} + R λ^{1/2+v}.   (10)

Figure 1: Validation errors associated to 20 × 20 grids of values for m (x axis) and λ (y axis) on pumadyn32nh (left), breast cancer (center) and cpuSmall (right).

The first and last terms on the right-hand side of the above inequality can be seen as forms of sample and approximation errors [25] and are studied in Lemma 4 and Theorem 2. The middle term can be seen as a computational error and depends on the considered subsampling scheme. Indeed, it is shown in Proposition 2 that C(m) can be taken as

  C_pl(m) = min{ t > 0 : (67 ∨ 5N_∞(t)) log(12κ²/(tδ)) ≤ m }

for the plain Nyström approach, and

  C_ALS(m) = min{ (19κ²/n) log(12n/δ) ≤ t ≤ ∥C∥ : 78T² N(t) log(48n/δ) ≤ m }

for the approximate leverage scores approach. The bounds in Theorem 1 follow by: 1) minimizing in λ the sum of the first and third terms; 2) choosing m so that the computational error is of the same order as the other terms. Computational resources and regularization are then tailored to the generalization properties of the data at hand. We add a few comments. First, note that the error bound in (10) holds for a large class of subsampling schemes, as discussed in Section C.1 in the appendix.
Then specific error bounds can be derived by developing computational error estimates. Second, the error bounds in Theorem 2 and Proposition 2, and hence in Theorem 1, easily generalize to a larger class of regularization schemes beyond Tikhonov approaches, namely spectral filtering [30]. For space constraints, these extensions are deferred to a longer version of the paper. Third, we note that, in practice, optimal data-driven parameter choices, e.g. based on hold-out estimates [31], can be used to adaptively achieve optimal learning bounds. Finally, we observe that a different perspective is derived starting from inequality (10), by noting that the roles played by m and λ can also be exchanged. Letting m play the role of a regularization parameter, λ can be set as a function of m and m tuned adaptively. For example, in the case of a plain Nyström approach, if we set λ = (log m)/m and m = 3n^{1/(2v+γ+1)} log n, then the obtained learning solution achieves the error bound in Eq. (9). As above, the subsampling level can also be chosen by cross-validation. Interestingly, in this case by tuning m we naturally control computational resources and regularization. An advantage of this latter parameterization is that, as described in the following, the solution corresponding to different subsampling levels is easy to update using Cholesky rank-one update formulas [34]. As discussed in the next section, in practice a joint tuning over m and λ can be done starting from small m and appears to be advantageous both for error and computational performance.

4 Incremental updates and experimental analysis

In this section, we first describe an incremental strategy to efficiently explore different subsampling levels, and then perform extensive empirical tests aimed in particular at: 1) investigating the statistical and computational benefits of considering varying subsampling levels, and 2) comparing the performance of the algorithm with state-of-the-art solutions on several large-scale benchmark datasets. Throughout this section, we only consider a plain Nyström approach, deferring to future work the analysis of leverage-score-based sampling techniques. Interestingly, we will see that such a basic approach can often provide state-of-the-art performance.

4.1 Efficient incremental updates

Algorithm 1: Incremental Nyström KRLS.
  Input: Dataset (x_i, y_i)_{i=1}^n, subsampling (x̃_j)_{j=1}^m, regularization parameter λ.
  Output: Nyström KRLS estimators {α̃_1, . . . , α̃_m}.
  Compute γ_1; R_1 ← √γ_1;
  for t ∈ {2, . . . , m} do
    Compute A_t, u_t, v_t;
    R_t ← [R_{t−1}, 0; 0, 0];
    R_t ← cholup(R_t, u_t, '+');
    R_t ← cholup(R_t, v_t, '−');
    α̃_t ← R_t^{−1}(R_t^{−⊤}(A_t^⊤ y));
  end for

Figure 2: Model selection time on the cpuSmall dataset. m ∈ [1, 1000] and T = 50, 10 repetitions.

Algorithm 1 efficiently computes solutions corresponding to different subsampling levels by exploiting rank-one Cholesky updates [34]. The proposed procedure allows to efficiently compute a whole regularization path of solutions, and hence perform fast model selection² (see Sect. A). In Algorithm 1, the function cholup is the Cholesky rank-one update formula available in many linear algebra libraries. The total cost of the algorithm is O(nm² + m³) time to compute α̃_2, . . . , α̃_m, while a naive non-incremental algorithm would require O(nm²T + m³T), where T is the number of analyzed subsampling levels. The following are some quantities needed by the algorithm: A_1 = a_1 and A_t = (A_{t−1} a_t) ∈ R^{n×t}, for any 2 ≤ t ≤ m. Moreover, for any 1 ≤ t ≤ m,

  g_t = √(1 + γ_t),   u_t = (c_t/(1 + g_t), g_t),   v_t = (c_t/(1 + g_t), −1),
  a_t = (K(x̃_t, x_1), . . . , K(x̃_t, x_n)),   b_t = (K(x̃_t, x̃_1), . . . , K(x̃_t, x̃_{t−1})),
  c_t = A_{t−1}^⊤ a_t + λn b_t,   γ_t = a_t^⊤ a_t + λn K(x̃_t, x̃_t).

4.2 Experimental analysis

We empirically study the properties of Algorithm 1, considering a Gaussian kernel of width σ. The selected datasets are already divided into a training and a test part³.
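The following NumPy sketch mirrors Algorithm 1 (our own illustration, not the authors' code). Instead of the two cholup rank-one updates it appends a row/column to the upper-triangular factor directly, which is algebraically equivalent (both yield the Cholesky factor of the bordered matrix) and keeps the same O(nm² + m³) total cost. R_t is maintained so that R_t^⊤R_t = K_nt^⊤K_nt + λn K_tt, and each α̃_t is recovered by two triangular solves.

```python
import numpy as np

def incremental_nystrom_path(Knm, Kmm, y, lam):
    """Return [alpha_1, ..., alpha_m] for the nested landmark sets 1..t.
    Knm: n x m kernel matrix, Kmm: m x m kernel matrix (full landmark set)."""
    n, m = Knm.shape
    alphas = []
    R = None
    for t in range(1, m + 1):
        a_t = Knm[:, t - 1]
        gamma = a_t @ a_t + lam * n * Kmm[t - 1, t - 1]
        if t == 1:
            R = np.array([[np.sqrt(gamma)]])
        else:
            # border B_{t-1} with column c_t and diagonal gamma_t
            c = Knm[:, :t - 1].T @ a_t + lam * n * Kmm[:t - 1, t - 1]
            r = np.linalg.solve(R.T, c)                # R_{t-1}^T r = c_t
            rho = np.sqrt(max(gamma - r @ r, 1e-12))   # new diagonal entry
            R = np.block([[R, r[:, None]],
                          [np.zeros((1, t - 1)), np.array([[rho]])]])
        # alpha_t = R^{-1} (R^{-T} (A_t^T y)), as in Algorithm 1
        alphas.append(np.linalg.solve(R, np.linalg.solve(R.T, Knm[:, :t].T @ y)))
    return alphas
```

The final α_m coincides, up to numerical error, with solving (K_nm^⊤K_nm + λn K_mm)α = K_nm^⊤y directly, but the whole path α_1, . . . , α_m comes at the cost of a single factorization.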
We randomly split the training part into a training set and a validation set (80% and 20% of the n training points, respectively) for parameter tuning via cross-validation. The m subsampled points for Nyström approximation are selected uniformly at random from the training set. We report the performance of the selected model on the fixed test set, repeating the process for several trials.

Interplay between λ and m. We begin with a set of results showing that incrementally exploring different subsampling levels can yield very good performance while substantially reducing the computational requirements. We consider the pumadyn32nh (n = 8192, d = 32), the breast cancer (n = 569, d = 30), and the cpuSmall (n = 8192, d = 12) datasets⁴. In Figure 1, we report the validation errors associated to a 20 × 20 grid of values for λ and m. The λ values are logarithmically spaced, while the m values are linearly spaced. The ranges and kernel bandwidths, chosen according to preliminary tests on the data, are σ = 2.66, λ ∈ [10⁻⁷, 1], m ∈ [10, 1000] for pumadyn32nh; σ = 0.9, λ ∈ [10⁻¹², 10⁻³], m ∈ [5, 300] for breast cancer; and σ = 0.1, λ ∈ [10⁻¹⁵, 10⁻¹²], m ∈ [100, 5000] for cpuSmall. The main observation that can be derived from this first series of tests is that a small m is sufficient to obtain the same results achieved with the largest m. For example, for pumadyn32nh it is sufficient to choose m = 62 and λ = 10⁻⁷ to obtain an average test RMSE of 0.33 over 10 trials, which is the same as the one obtained using m = 1000 and λ = 10⁻³, with a 3-fold speedup of the joint training and validation phase. Also, it is interesting to observe that for given values of λ, large values of m can decrease the performance. This observation is consistent with the results in Section 3.1, showing that m can play the role of a regularization parameter. Similar results are obtained for breast cancer, where for λ = 4.28 × 10⁻⁶ and m = 300 we obtain a 1.24% average classification error on the test set over 20 trials, while for λ = 10⁻¹² and m = 67 we obtain 1.86%. For cpuSmall, with m = 5000 and λ = 10⁻¹² the average test RMSE over 5 trials is 12.2, while for m = 2679 and λ = 10⁻¹⁵ it is only slightly higher, 13.3, but computing its associated solution requires less than half of the time and approximately half of the memory.

Regularization path computation. If the subsampling level m is used as a regularization parameter, the computation of a regularization path corresponding to different subsampling levels becomes crucial during the model selection phase. A naive approach, which consists in recomputing the solutions of Eq. (5) for each subsampling level, would require O(m²nT + m³LT) computational time, where T is the number of solutions with different subsampling levels to be evaluated and L is the number of Tikhonov regularization parameters. On the other hand, by using the incremental Nyström algorithm the model selection time complexity is O(m²n + m³L) for the whole regularization path.

²The code for Algorithm 1 is available at lcsl.github.io/NystromCoRe.
³In the following we denote by n the total number of points and by d the number of dimensions.
⁴www.cs.toronto.edu/~delve and archive.ics.uci.edu/ml/datasets

Table 1: Test RMSE comparison for exact and approximated kernel methods. The results for KRLS, Batch Nyström, RF and Fastfood are the ones reported in [6]. n_tr is the size of the training set.

| Dataset | n_tr | d | Incremental Nyström RBF | KRLS RBF | Batch Nyström RBF | RF RBF | Fastfood RBF | Fastfood FFT | KRLS Matern | Fastfood Matern |
|---|---|---|---|---|---|---|---|---|---|---|
| Insurance Company | 5822 | 85 | 0.23180 ± 4 × 10⁻⁵ | 0.231 | 0.232 | 0.266 | 0.264 | 0.266 | 0.234 | 0.235 |
| CPU | 6554 | 21 | 2.8466 ± 0.0497 | 7.271 | 6.758 | 7.103 | 7.366 | 4.544 | 4.345 | 4.211 |
| CT slices (axial) | 42800 | 384 | 7.1106 ± 0.0772 | NA | 60.683 | 49.491 | 43.858 | 58.425 | NA | 14.868 |
| Year Prediction MSD | 463715 | 90 | 0.10470 ± 5 × 10⁻⁵ | NA | 0.113 | 0.123 | 0.115 | 0.106 | NA | 0.116 |
| Forest | 522910 | 54 | 0.9638 ± 0.0186 | NA | 0.837 | 0.840 | 0.840 | 0.838 | NA | 0.976 |
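The joint (m, λ) model selection just described can be sketched as follows (a schematic of our own, with invented names, recomputing each solution naively rather than incrementally): for each subsampling level m and each λ, fit on the training split, score on the validation split, and keep the best pair.

```python
import numpy as np

def gauss(A, B, sigma):
    return np.exp(-((A[:, None, :] - B[None, :, :]) ** 2).sum(-1) / (2 * sigma ** 2))

def select_m_lambda(Xtr, ytr, Xval, yval, ms, lams, sigma):
    """Grid search over subsampling levels and Tikhonov parameters."""
    n = Xtr.shape[0]
    best = (np.inf, None, None)
    for m in ms:
        Xm = Xtr[:m]                        # nested landmark sets, as in Algorithm 1
        Knm, Kmm = gauss(Xtr, Xm, sigma), gauss(Xm, Xm, sigma)
        Kvm = gauss(Xval, Xm, sigma)
        for lam in lams:
            alpha = np.linalg.pinv(Knm.T @ Knm + lam * n * Kmm) @ (Knm.T @ ytr)
            rmse = np.sqrt(np.mean((Kvm @ alpha - yval) ** 2))
            if rmse < best[0]:
                best = (rmse, m, lam)
    return best  # (validation RMSE, m*, lambda*)
```

An incremental implementation would reuse the Cholesky factor across the inner m loop; the interface and the selection logic stay the same.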
We experimentally verify this speedup on cpuSmall with 10 repetitions, setting m ∈ [1, 5000] and T = 50. The model selection times, measured on a server with 12 × 2.10GHz Intel® Xeon® E5-2620 v2 CPUs and 132 GB of RAM, are reported in Figure 2. The result clearly confirms the beneficial effect of incremental Nyström model selection on the computational time.

Predictive performance comparison. Finally, we consider the performance of the algorithm on several large-scale benchmark datasets considered in [6]; see Table 1. σ has been chosen on the basis of preliminary data analysis. m and λ have been chosen by cross-validation, starting from small subsampling values up to m_max = 2048, and considering λ ∈ [10⁻¹², 1]. After model selection, we retrain the best model on the entire training set and compute the RMSE on the test set. We consider 10 trials, reporting the performance mean and standard deviation. The results in Table 1 compare Nyström computational regularization with the following methods (as in [6]):

• Kernel Regularized Least Squares (KRLS): Not compatible with large datasets.
• Random Fourier features (RF): As in [4], with a number of random features D = 2048.
• Fastfood RBF, FFT and Matern kernel: As in [6], with D = 2048 random features.
• Batch Nyström: Nyström method [3] with uniform sampling and m = 2048.

The above results show that the proposed incremental Nyström approach behaves really well, matching state-of-the-art predictive performance.

Acknowledgments

The work described in this paper is supported by the Center for Brains, Minds and Machines (CBMM), funded by NSF STC award CCF-1231216; and by FIRB project RBFR12M3AC, funded by the Italian Ministry of Education, University and Research.

References

[1] Bernhard Schölkopf and Alexander J. Smola. Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond (Adaptive Computation and Machine Learning). MIT Press, 2002.
[2] Alex J. Smola and Bernhard Schölkopf.
Sparse Greedy Matrix Approximation for Machine Learning. In ICML, pages 911–918. Morgan Kaufmann, 2000.
[3] C. Williams and M. Seeger. Using the Nyström Method to Speed Up Kernel Machines. In NIPS, pages 682–688. MIT Press, 2000.
[4] Ali Rahimi and Benjamin Recht. Random Features for Large-Scale Kernel Machines. In NIPS, pages 1177–1184. Curran Associates, Inc., 2007.
[5] J. Yang, V. Sindhwani, H. Avron, and M. W. Mahoney. Quasi-Monte Carlo Feature Maps for Shift-Invariant Kernels. In ICML, volume 32 of JMLR Proceedings, pages 485–493. JMLR.org, 2014.
[6] Quoc V. Le, Tamás Sarlós, and Alexander J. Smola. Fastfood - Computing Hilbert Space Expansions in loglinear time. In ICML, volume 28 of JMLR Proceedings, pages 244–252. JMLR.org, 2013.
[7] Si Si, Cho-Jui Hsieh, and Inderjit S. Dhillon. Memory Efficient Kernel Approximation. In ICML, volume 32 of JMLR Proceedings, pages 701–709. JMLR.org, 2014.
[8] Yuchen Zhang, John C. Duchi, and Martin J. Wainwright. Divide and Conquer Kernel Ridge Regression. In COLT, volume 30 of JMLR Proceedings, pages 592–617. JMLR.org, 2013.
[9] S. Kumar, M. Mohri, and A. Talwalkar. Ensemble Nyström Method. In NIPS, pages 1060–1068, 2009.
[10] Mu Li, James T. Kwok, and Bao-Liang Lu. Making Large-Scale Nyström Approximation Possible. In ICML, pages 631–638. Omnipress, 2010.
[11] Kai Zhang, Ivor W. Tsang, and James T. Kwok. Improved Nyström Low-rank Approximation and Error Analysis. In ICML, pages 1232–1239. ACM, 2008.
[12] Bo Dai, Bo Xie, Niao He, Yingyu Liang, Anant Raj, Maria-Florina Balcan, and Le Song. Scalable Kernel Methods via Doubly Stochastic Gradients. In NIPS, pages 3041–3049, 2014.
[13] Petros Drineas and Michael W. Mahoney. On the Nyström Method for Approximating a Gram Matrix for Improved Kernel-Based Learning. JMLR, 6:2153–2175, December 2005.
[14] A. Gittens and M. W. Mahoney. Revisiting the Nyström method for improved large-scale machine learning. 28:567–575, 2013.
[15] Shusen Wang and Zhihua Zhang.
Improving CUR Matrix Decomposition and the Nyström Approximation via Adaptive Sampling. JMLR, 14(1):2729–2769, 2013.
[16] Petros Drineas, Malik Magdon-Ismail, Michael W. Mahoney, and David P. Woodruff. Fast approximation of matrix coherence and statistical leverage. JMLR, 13:3475–3506, 2012.
[17] Michael B. Cohen, Yin Tat Lee, Cameron Musco, Christopher Musco, Richard Peng, and Aaron Sidford. Uniform Sampling for Matrix Approximation. In ITCS, pages 181–190. ACM, 2015.
[18] Shusen Wang and Zhihua Zhang. Efficient Algorithms and Error Analysis for the Modified Nyström Method. In AISTATS, volume 33 of JMLR Proceedings, pages 996–1004. JMLR.org, 2014.
[19] S. Kumar, M. Mohri, and A. Talwalkar. Sampling methods for the Nyström method. JMLR, 13(1):981–1006, 2012.
[20] Corinna Cortes, Mehryar Mohri, and Ameet Talwalkar. On the Impact of Kernel Approximation on Learning Accuracy. In AISTATS, volume 9 of JMLR Proceedings, pages 113–120. JMLR.org, 2010.
[21] R. Jin, T. Yang, M. Mahdavi, Y. Li, and Z. Zhou. Improved Bounds for the Nyström Method With Application to Kernel Classification. IEEE Transactions on Information Theory, 59(10), Oct 2013.
[22] Tianbao Yang, Yu-Feng Li, Mehrdad Mahdavi, Rong Jin, and Zhi-Hua Zhou. Nyström Method vs Random Fourier Features: A Theoretical and Empirical Comparison. In NIPS, pages 485–493, 2012.
[23] Francis Bach. Sharp analysis of low-rank kernel matrix approximations. In COLT, volume 30, 2013.
[24] A. Alaoui and M. W. Mahoney. Fast randomized kernel methods with statistical guarantees. arXiv, 2014.
[25] I. Steinwart and A. Christmann. Support Vector Machines. Springer New York, 2008.
[26] Andrea Caponnetto and Ernesto De Vito. Optimal rates for the regularized least-squares algorithm. Foundations of Computational Mathematics, 7(3):331–368, 2007.
[27] L. Lo Gerfo, Lorenzo Rosasco, Francesca Odone, Ernesto De Vito, and Alessandro Verri. Spectral Algorithms for Supervised Learning. Neural Computation, 20(7):1873–1897, 2008.
[28] I.
Steinwart, D. Hush, and C. Scovel. Optimal rates for regularized least squares regression. In COLT, 2009.
[29] S. Mendelson and J. Neeman. Regularization in kernel learning. The Annals of Statistics, 38(1), 2010.
[30] F. Bauer, S. Pereverzev, and L. Rosasco. On regularization algorithms in learning theory. Journal of Complexity, 23(1):52–72, 2007.
[31] A. Caponnetto and Yuan Yao. Adaptive rates for regularization operators in learning theory. Analysis and Applications, 8, 2010.
[32] Y. Ying and M. Pontil. Online gradient descent learning algorithms. Foundations of Computational Mathematics, 8(5):561–596, 2008.
[33] Alessandro Rudi, Guillermo D. Canas, and Lorenzo Rosasco. On the Sample Complexity of Subspace Learning. In NIPS, pages 2067–2075, 2013.
[34] Gene H. Golub and Charles F. Van Loan. Matrix computations, volume 3. JHU Press, 2012.
Planar Ultrametrics for Image Segmentation

Julian Yarkony, Experian Data Lab, San Diego, CA 92130, julian.yarkony@experian.com
Charless C. Fowlkes, Department of Computer Science, University of California Irvine, fowlkes@ics.uci.edu

Abstract

We study the problem of hierarchical clustering on planar graphs. We formulate this in terms of finding the closest ultrametric to a specified set of distances and solve it using an LP relaxation that leverages minimum cost perfect matching as a subroutine to efficiently explore the space of planar partitions. We apply our algorithm to the problem of hierarchical image segmentation.

1 Introduction

We formulate hierarchical image segmentation from the perspective of estimating an ultrametric distance over the set of image pixels that agrees closely with an input set of noisy pairwise distances. An ultrametric space replaces the usual triangle inequality with the ultrametric inequality d(u, v) ≤ max{d(u, w), d(v, w)}, which captures the transitive property of clustering (if u and w are in the same cluster and v and w are in the same cluster, then u and v must also be in the same cluster). Thresholding an ultrametric immediately yields a partition into sets whose diameter is less than the given threshold. Varying this distance threshold naturally produces a hierarchical clustering in which clusters at high thresholds are composed of clusters at lower thresholds. Inspired by the approach of [1], our method represents an ultrametric explicitly as a hierarchical collection of segmentations. Determining the appropriate segmentation at a single distance threshold is equivalent to finding a minimum-weight multicut in a graph with both positive and negative edge weights [3, 14, 2, 11, 20, 21, 4, 19, 7]. Finding an ultrametric imposes the additional constraint that these multicuts are hierarchically consistent across different thresholds. We focus on the case where the input distances are specified by a planar graph.
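The thresholding operation described above is easy to sketch (our own hand-rolled illustration, not code from the paper): keep every edge whose ultrametric distance is below the threshold τ and take connected components via union-find. Because of the ultrametric inequality, raising τ only merges components, which is exactly the nested hierarchy the paper exploits.

```python
def components_at_threshold(n_vertices, edges, d, tau):
    """Connected components after keeping edges with distance < tau.
    edges: list of (u, v) pairs; d: list of distances d_e, aligned with edges."""
    parent = list(range(n_vertices))
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]  # path halving
            a = parent[a]
        return a
    for (u, v), de in zip(edges, d):
        if de < tau:
            parent[find(u)] = find(v)
    return [find(v) for v in range(n_vertices)]
```

For example, on a path graph with edges [(0,1), (1,2), (2,3)] and distances [1, 3, 1], the threshold τ = 2 yields the clusters {0,1} and {2,3}, while τ = 4 merges everything into one cluster.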
This arises naturally in the domain of image segmentation, where elements are pixels or superpixels and distances are defined between neighbors; planarity allows us to exploit fast combinatorial algorithms for partitioning planar graphs that yield tighter LP relaxations than the local polytope relaxation often used in graphical inference [20]. The paper is organized as follows. We first introduce the closest ultrametric problem and the relation between multicuts and ultrametrics. We then describe an LP relaxation that uses a delayed column generation approach and exploits planarity to efficiently find cuts via the classic reduction to minimum-weight perfect matching [13, 8, 9, 10]. We apply our algorithm to the task of natural image segmentation and demonstrate that our algorithm converges rapidly and produces optimal or near-optimal solutions in practice.

2 Closest Ultrametric and Multicuts

Let G = (V, E) be a weighted graph with non-negative edge weights θ indexed by edges e = (u, v) ∈ E. Our goal is to find an ultrametric distance d_{(u,v)} over vertices of the graph that is close to θ in the sense that the distortion Σ_{(u,v)∈E} ∥θ_{(u,v)} − d_{(u,v)}∥²₂ is minimized. We begin by reformulating this closest ultrametric problem in terms of finding a set of nested multicuts in a family of weighted graphs. We specify a partitioning or multicut of the vertices of the graph G into components using a binary vector X̄ ∈ {0, 1}^{|E|} where X̄_e = 1 indicates that the edge e = (u, v) is “cut” and that the vertices u and v associated with the edge are in separate components of the partition. We use MCUT(G) to denote the set of binary indicator vectors X̄ that represent valid multicuts of the graph G. For notational simplicity, in the remainder of the paper we frequently omit the dependence on G, which is given as a fixed input.
A necessary and sufficient condition for an indicator vector X̄ to define a valid multicut in G is that for every cycle of edges, if one edge on the cycle is cut then at least one other edge in the cycle must also be cut. Let C denote the set of all cycles in G, where each cycle c ∈ C is a set of edges and c − ê is the set of edges in cycle c excluding edge ê. We can express MCUT in terms of these cycle inequalities as:

  MCUT = { X̄ ∈ {0, 1}^{|E|} : Σ_{e∈c−ê} X̄_e ≥ X̄_ê, ∀c ∈ C, ê ∈ c }   (1)

A hierarchical clustering of a graph can be described by a nested collection of multicuts. We denote the space of valid hierarchical partitions with L layers by Ω̄_L, which we represent by a set of L edge-indicator vectors X = (X̄¹, X̄², X̄³, . . . , X̄^L) in which any cut edge remains cut at all finer layers of the hierarchy:

  Ω̄_L = {(X̄¹, X̄², . . . , X̄^L) : X̄^l ∈ MCUT, X̄^l ≥ X̄^{l+1} ∀l}   (2)

Given a valid hierarchical clustering X, an ultrametric d can be specified over the vertices of the graph by choosing a sequence of real values 0 = δ_0 < δ_1 < δ_2 < . . . < δ_L that indicate a distance threshold associated with each level l of the hierarchical clustering. The ultrametric distance d specified by the pair (X, δ) assigns a distance to each pair of vertices d_{(u,v)} based on the coarsest level of the clustering at which they remain in separate clusters. For pairs corresponding to an edge in the graph (u, v) = e ∈ E we can write this explicitly in terms of the multicut indicator vectors as:

  d_e = max_{l∈{0,1,...,L}} δ_l X̄^l_e = Σ_{l=0}^L δ_l [X̄^l_e > X̄^{l+1}_e]   (3)

We assume by convention that X̄⁰_e = 1 and X̄^{L+1}_e = 0. Pairs (u, v) that do not correspond to an edge in the original graph can still be assigned a unique distance based on the coarsest level l at which they lie in different connected components of the cut specified by X̄^l.
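A small sketch of the two definitions above (ours, purely illustrative): a cycle-inequality check for Eq. (1) and the ultrametric of Eq. (3) computed from nested cut indicators.

```python
def is_valid_multicut(cycles, x):
    """Eq. (1): for every cycle c and edge e_hat in c, the cut indicators
    on c - e_hat must sum to at least x[e_hat]
    (equivalently, no cycle may contain exactly one cut edge)."""
    return all(sum(x[e] for e in c) - x[e_hat] >= x[e_hat]
               for c in cycles for e_hat in c)

def ultrametric_from_layers(X_layers, deltas):
    """Eq. (3): d_e = max_l delta_l * Xbar^l_e, with deltas[l-1] for layer l."""
    E = len(X_layers[0])
    return [max([0.0] + [dl * Xl[e] for dl, Xl in zip(deltas, X_layers)])
            for e in range(E)]
```

On a triangle with edges e0 = (0,1), e1 = (1,2), e2 = (0,2), the nested layers X̄¹ = [1,1,1] and X̄² = [1,0,1] (note X̄¹ ≥ X̄²) with δ = (1, 2) give d = [2, 1, 2], which satisfies the ultrametric inequality, while the indicator [1,0,0] cuts exactly one triangle edge and is correctly rejected.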
To compute the quality of an ultrametric d with respect to an input set of edge weights θ, we measure the squared L2 difference between the edge weights and the ultrametric distance, ‖θ − d‖²₂. To write this compactly in terms of multicut indicator vectors, we construct a set of weights for each edge and layer, denoted θ^l_e, so that $\sum_{l=0}^{m} \theta^l_e = (\theta_e - \delta_m)^2$. These weights are given explicitly by the telescoping series:

$\theta^0_e = \theta_e^2, \qquad \theta^l_e = (\theta_e - \delta_l)^2 - (\theta_e - \delta_{l-1})^2 \ \ \forall l \ge 1$   (4)

We use θ^l ∈ R^{|E|} to denote the vector containing θ^l_e for all e ∈ E. For a fixed number of levels L and fixed set of thresholds δ, the problem of finding the closest ultrametric d can then be written as an integer linear program (ILP) over the edge cut indicators:

$\min_{X \in \bar\Omega^L} \sum_{e \in E} \Bigl( \theta_e - \sum_{l=0}^{L} \delta_l \,[\bar X^l_e > \bar X^{l+1}_e] \Bigr)^2 = \min_{X \in \bar\Omega^L} \sum_{e \in E} \sum_{l=0}^{L} (\theta_e - \delta_l)^2 (\bar X^l_e - \bar X^{l+1}_e)$   (5)

$= \min_{X \in \bar\Omega^L} \sum_{e \in E} \Bigl( \theta_e^2 \bar X^0_e + \sum_{l=1}^{L} \bigl[ (\theta_e - \delta_l)^2 - (\theta_e - \delta_{l-1})^2 \bigr] \bar X^l_e - (\theta_e - \delta_L)^2 \bar X^{L+1}_e \Bigr)$

$= \min_{X \in \bar\Omega^L} \sum_{l=0}^{L} \sum_{e \in E} \theta^l_e \bar X^l_e = \min_{X \in \bar\Omega^L} \sum_{l=0}^{L} \theta^l \cdot \bar X^l$   (6)

This optimization corresponds to solving a collection of minimum-weight multicut problems where the multicuts are constrained to be hierarchically consistent.

Figure 1: (a) Any partitioning X can be represented as a linear superposition of cuts Z where each cut isolates a connected component of the partition and is assigned a weight γ = 1/2 [20]. By introducing auxiliary slack variables β, we are able to represent a larger set of valid indicator vectors X using fewer columns of Z. (b) By introducing additional slack variables at each layer of the hierarchical segmentation, we can efficiently represent many hierarchical segmentations (here {X^1, X^2, X^3}) that are consistent from layer to layer while using only a small number of cut indicators as columns of Z.

Computing minimum-weight multicuts (also known as correlation clustering) is NP-hard even in the case of planar graphs [6]. A direct approach to finding an approximate solution to Eq 6 is to relax the integrality constraints on X̄^l and instead optimize over the whole polytope defined by the set of cycle inequalities. We use Ω^L to denote the corresponding relaxation of Ω̄^L. While the resulting polytope is not the convex hull of MCUT, the integral vertices do correspond exactly to the set of valid multicuts [12]. In practice, we found that a straightforward cutting-plane approach that successively adds violated cycle inequalities to this relaxation of Eq 6 requires far too many constraints and is too slow to be useful. Instead, we develop a column generation approach tailored for planar graphs that allows for efficient and accurate approximate inference.
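The telescoping construction of Eq. 4 is easy to sanity-check numerically: partial sums of the layer weights must reproduce (θ_e − δ_m)², which is exactly what makes the rewriting in Eqs. 5-6 linear in the cut indicators. A minimal sketch (the threshold and weight values are arbitrary):

```python
def layer_weights(theta_e, deltas):
    # Telescoping weights of Eq. 4: theta^0_e = theta_e^2, and for l >= 1
    # theta^l_e = (theta_e - delta_l)^2 - (theta_e - delta_{l-1})^2.
    w = [theta_e ** 2]
    for l in range(1, len(deltas)):
        w.append((theta_e - deltas[l]) ** 2 - (theta_e - deltas[l - 1]) ** 2)
    return w

deltas = [0.0, 1.0, 2.0]
theta_e = 1.7
w = layer_weights(theta_e, deltas)

# Partial sums reproduce (theta_e - delta_m)^2, so summing theta^l_e * X^l_e
# over the levels at which an edge remains cut yields its squared distortion.
for m in range(len(deltas)):
    assert abs(sum(w[:m + 1]) - (theta_e - deltas[m]) ** 2) < 1e-12
print(w)
```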
3 The Cut Cone and Planar Multicuts

Consider a partition of a planar graph into two disjoint sets of nodes. We denote the space of indicator vectors corresponding to such two-way cuts by CUT. A cut may yield more than two connected components, but it cannot produce every possible multicut (e.g., it cannot split a triangle of three nodes into three separate components). Let Z ∈ {0, 1}^{|E|×|CUT|} be an indicator matrix where each column specifies a valid two-way cut, with Z_{ek} = 1 if and only if edge e is cut in two-way cut k. The indicator vector of any multicut in a planar graph can be generated by a suitable linear combination of cuts (columns of Z) that isolate the individual components from the rest of the graph, where the weight of each such cut is 1/2. Let γ ∈ R^{|CUT|} be a vector specifying a positive weighted combination of cuts. The set CUT△ = {Zγ : γ ≥ 0} is the conic hull of CUT, or “cut cone”. Since any multicut can be expressed as a superposition of cuts, the cut cone is identical to the conic hull of MCUT. This equivalence suggests an LP relaxation of the minimum-cost multicut given by

$\min_{\gamma \ge 0} \theta \cdot Z\gamma \quad \text{s.t.} \quad Z\gamma \le 1$   (7)

where the vector θ ∈ R^{|E|} specifies the edge weights. For the case of planar graphs, any solution to this LP relaxation satisfies the cycle inequalities (see supplement and [12, 18, 10]).

Expanded Multicut Objective: Since the matrix Z contains an exponential number of cuts, Eq. 7 is still intractable. Instead we consider an approximation using a constraint set Ẑ which is a subset of the columns of Z. In previous work [20], we showed that since the optimal multicut may no longer lie in the span of the reduced cut matrix Ẑ, it is useful to allow some values of Ẑγ to exceed 1 (see Figure 1(a) for an example). We introduce a slack vector β ≥ 0 that tracks the presence of any “overcut” edges and prevents them from contributing to the objective when the corresponding edge weight is negative. Let θ⁻_e = min(θ_e, 0) denote the non-positive component of θ_e.
The expanded multicut objective is given by:

$\min_{\gamma \ge 0,\, \beta \ge 0} \theta \cdot \hat Z\gamma - \theta^- \cdot \beta \quad \text{s.t.} \quad \hat Z\gamma - \beta \le 1$   (8)

For any edge e such that θ_e < 0, any decrease in the objective from overcutting by an amount β_e is exactly compensated for by the term −θ⁻_e β_e. When Ẑ contains all cuts (i.e., Ẑ = Z), Eq 7 and Eq 8 are equivalent [20]. Further, if γ⋆ is the minimizer of Eq 8 when Ẑ only contains a subset of columns, then the edge indicator vector given by X = min(1, Ẑγ⋆) still satisfies the cycle inequalities (see supplement for details).

4 Expanded LP for Finding the Closest Ultrametric

To develop an LP relaxation of the closest ultrametric problem, we replace the multicut problem at each layer l with the expanded multicut objective described by Eq 8. We let γ = {γ^1, γ^2, γ^3, . . . , γ^L} and β = {β^1, β^2, β^3, . . . , β^L} denote the collection of weights and slacks for the levels of the hierarchy, and let θ^{+l}_e = max(0, θ^l_e) and θ^{−l}_e = min(0, θ^l_e) denote the positive and negative components of θ^l. To enforce hierarchical consistency between layers, we would like to add the constraint that Zγ^{l+1} ≤ Zγ^l. However, this constraint is too rigid when Z does not include all possible cuts. It is thus computationally useful to introduce an additional slack vector associated with each level l and edge e, which we denote as α = {α^1, α^2, α^3, . . . , α^{L−1}}. The introduction of α^l_e allows cuts represented by Zγ^l to violate the hierarchical constraint. We modify the objective so that violations of the original hierarchy constraint are paid for in proportion to θ^{+l}_e. The introduction of α allows us to find valid ultrametrics while using a smaller number of columns of Z than would otherwise be required (illustrated in Figure 1(b)). We call this relaxed closest ultrametric problem, including the slack variable α, the expanded closest ultrametric objective, written as:

$\min_{\gamma \ge 0,\, \beta \ge 0,\, \alpha \ge 0} \ \sum_{l=1}^{L} \theta^l \cdot Z\gamma^l - \sum_{l=1}^{L} \theta^{-l} \cdot \beta^l + \sum_{l=1}^{L-1} \theta^{+l} \cdot \alpha^l$   (9)

subject to
$Z\gamma^{l+1} + \alpha^{l+1} \le Z\gamma^l + \alpha^l \ \ \forall l < L, \qquad Z\gamma^l - \beta^l \le 1 \ \ \forall l$

where by convention we define α^L = 0 and we have dropped the constant l = 0 term from Eq 6. Given a solution (α, β, γ) we can recover a relaxed solution to the closest ultrametric problem (Eq. 6) over Ω^L by setting X^l_e = min(1, max_{m≥l} (Zγ^m)_e). In the supplement, we demonstrate that for any (α, β, γ) that obeys the constraints in Eq 9, this thresholding operation yields a solution X that lies in Ω^L and achieves the same or lower objective value.

5 The Dual Objective

We optimize the dual of the objective in Eq 9 using an efficient column generation approach based on perfect matching. We introduce two sets of Lagrange multipliers ω = {ω^1, ω^2, ω^3, . . . , ω^{L−1}} and λ = {λ^1, λ^2, λ^3, . . . , λ^L} corresponding to the between-layer and within-layer constraints respectively. For notational convenience, let ω^0 = 0. The dual objective can then be written as

$\max_{\omega \ge 0,\, \lambda \ge 0} \ \sum_{l=1}^{L} -\lambda^l \cdot 1$   (10)
$\text{s.t.} \quad \theta^{-l} \le -\lambda^l \ \forall l, \qquad -(\omega^{l-1} - \omega^l) \le \theta^{+l} \ \forall l, \qquad (\theta^l + \lambda^l + \omega^{l-1} - \omega^l) \cdot Z \ge 0 \ \forall l$

The dual LP can be interpreted as finding a small modification of the original edge weights θ^l so that every possible two-way cut of each resulting graph at level l has non-negative weight. Observe that the introduction of the two slack terms α and β in the primal problem (Eq 9) results in bounds on the Lagrange multipliers λ and ω in the dual problem in Eq 10. In practice these dual constraints turn out to be essential for efficient optimization and constitute the core contribution of this paper.

Algorithm 1 Dual Closest Ultrametric via Cutting Planes
  Ẑ^l ← {} ∀l, residual ← −∞
  while residual < 0 do
    {ω}, {λ} ← Solve Eq 10 given Ẑ
    residual ← 0
    for l = 1 : L do
      z^l ← arg min_{z ∈ CUT} (θ^l + λ^l + ω^{l−1} − ω^l) · z
      residual ← residual + (3/2)(θ^l + λ^l + ω^{l−1} − ω^l) · z^l
      {z_(1), z_(2), . . . , z_(M)} ← isocuts(z^l)
      Ẑ^l ← Ẑ^l ∪ {z_(1), z_(2), . . . , z_(M)}
    end for
  end while
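As an aside, the superposition property from Section 3 (any planar multicut is a weight-1/2 combination of the cuts isolating its components) can be verified on a toy graph. This is an illustrative sketch, not the paper's code:

```python
import numpy as np

# Edges of a 3-node path 0-1-2. The multicut placing each vertex in its own
# component cuts both edges. The columns of Z are the two-way cuts isolating
# {0}, {1}, and {2} respectively (the cut isolating {1} severs both edges).
Z = np.array([[1, 1, 0],
              [0, 1, 1]])
x_multicut = np.array([1, 1])

gamma = np.full(3, 0.5)          # each isolating cut receives weight 1/2
print(Z @ gamma)                 # reproduces the multicut indicator [1. 1.]
```

Each cut edge lies on the boundary of exactly two components, so it is counted twice in the sum of isolating cuts; the weight 1/2 cancels the double counting.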
6 Solving the Dual via Cutting Planes

The chief complexity of the dual LP is contained in the constraints involving Z, which encode non-negativity of an exponential number of cuts of the graph represented by the columns of Z. To circumvent the difficulty of explicitly enumerating the columns of Z, we employ a cutting plane method that efficiently searches for additional violated constraints (columns of Z), which are then successively added. Let Ẑ denote the current working set of columns. Our dual optimization algorithm iterates over the following three steps: (1) solve the dual LP with Ẑ, (2) find the most violated constraint of the form (θ^l + λ^l + ω^{l−1} − ω^l) · Z ≥ 0 for each layer l, (3) append a column to the matrix Ẑ for each such cut found. We terminate when no violated constraints exist or a computational budget has been exceeded.

Finding Violated Constraints: Identifying columns to add to Ẑ is carried out for each layer l separately. Finding the most violated constraint of the full problem corresponds to computing the minimum-weight cut of a graph with edge weights θ^l + λ^l + ω^{l−1} − ω^l. If this cut has non-negative weight then all the constraints are satisfied; otherwise we add the corresponding cut indicator vector as an additional column of Ẑ. To generate a new constraint for layer l based on the current Lagrange multipliers, we solve

$z^l = \arg\min_{z \in \mathrm{CUT}} \sum_{e \in E} (\theta^l_e + \lambda^l_e + \omega^{l-1}_e - \omega^l_e)\, z_e$   (11)

and subsequently add the new constraints from all layers to our LP, Ẑ ← [Ẑ, z^1, z^2, . . . , z^L]. Unlike the multicut problem, finding a minimum-weight (two-way) cut in a planar graph can be solved exactly by a reduction to minimum-weight perfect matching. This is a classic result that, e.g., provides an exact solution for the ground state of a 2D lattice Ising model without a ferromagnetic field [13, 8, 9, 10] in O(N^{3/2} log N) time [15].
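On graphs small enough to enumerate, the separation step of Eq. 11 can be mimicked without the matching reduction. The following brute-force oracle is an illustrative stand-in (the modified edge weights are arbitrary, hypothetical values):

```python
from itertools import product

def min_weight_cut(n, edges, w):
    # Brute-force stand-in for the matching-based separation oracle of Eq. 11:
    # enumerate all two-way vertex labelings and return the minimum-weight cut
    # indicator. Only viable for tiny graphs; the paper exploits planarity to
    # solve this exactly via minimum-weight perfect matching.
    best_val, best_z = 0.0, [0] * len(edges)   # the empty cut has weight 0
    for labels in product([0, 1], repeat=n):
        z = [1 if labels[u] != labels[v] else 0 for (u, v) in edges]
        val = sum(wi * zi for wi, zi in zip(w, z))
        if val < best_val:
            best_val, best_z = val, z
    return best_val, best_z

# Hypothetical modified weights theta^l + lambda^l + omega^{l-1} - omega^l:
edges = [(0, 1), (1, 2), (2, 3), (0, 3)]
val, z = min_weight_cut(4, edges, [0.5, -1.0, 0.8, -0.3])
print(val, z)   # a negative value identifies a violated constraint to append
```

Here the cheapest cut separates {0,1} from {2,3}, severing the two negatively weighted edges; its negative weight certifies a violated dual constraint.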
Figure 2: (a) The average convergence of the upper (blue) and lower bounds (red) as a function of running time. Values plotted are the gap between the bound and the best lower bound computed (at termination) for a given problem instance. This relative gap is averaged over problem instances which have not yet converged at a given time point. We indicate the percentage of problem instances that have yet to terminate using black bars marking {95, 85, 75, 65, . . . , 5} percent. (b) Histogram of the ratio of closest ultrametric objective values for our algorithm (UM) and the baseline clustering produced by UCM. All ratios were less than 1, showing that in no instance did UM produce a worse solution than UCM.

Computing a lower bound: At a given iteration, prior to adding a newly generated set of constraints, we can compute the total residual constraint violation over all layers of the hierarchy as $\Delta = \sum_l (\theta^l + \lambda^l + \omega^{l-1} - \omega^l) \cdot z^l$. In the supplement we demonstrate that the value of the dual objective plus (3/2)∆ is a lower bound on the relaxed closest ultrametric problem in Eq 9. Thus, as the costs of the minimum-weight matchings approach zero from below, the objective of the reduced problem over Ẑ approaches an accurate lower bound on the optimization over Ω̄^L.

Expanding generated cut constraints: When a given cut z^l produces more than two connected components, we found it useful to add a constraint corresponding to each component, following the approach of [20]. Let the number of connected components of z^l be denoted M. For each of the M components we then add one column to Ẑ corresponding to the cut that isolates that connected component from the rest. This allows more flexibility in representing the final optimal multicut as a superposition of these components.
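A plain-Python sketch of this expansion step (a hypothetical reimplementation, not the authors' MATLAB code): given a generated cut with M components, emit one isolating-cut column per component.

```python
def isocuts(n, edges, z):
    # Connected components of the graph left after removing the cut edges,
    # found by union-find; then one isolating cut per component.
    parent = list(range(n))
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a
    for (u, v), ze in zip(edges, z):
        if ze == 0:
            parent[find(u)] = find(v)
    comps = {}
    for v in range(n):
        comps.setdefault(find(v), []).append(v)
    cols = []
    for comp in comps.values():
        inside = set(comp)
        # Cut exactly the edges leaving this component.
        cols.append([1 if (u in inside) != (v in inside) else 0
                     for (u, v) in edges])
    return cols

edges = [(0, 1), (1, 2), (2, 3), (0, 3)]
print(isocuts(4, edges, [1, 1, 0, 1]))   # three components -> three columns
```

For the cut shown, the components are {0}, {1}, and {2,3}, so three isolating-cut columns are produced; a half-weight sum of these columns recovers the original multicut indicator.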
In addition, we also found it useful in practice to maintain a separate set of constraints Ẑ^l for each layer l. Maintaining independent constraints Ẑ^1, Ẑ^2, . . . , Ẑ^L can result in a smaller overall LP.

Speeding convergence of ω: We found that adding an explicit penalty term to the objective that encourages small values of ω speeds up convergence dramatically with no loss in solution quality. In our experiments, this penalty is scaled by a parameter ϵ = 10⁻⁴ which is chosen to be extremely small in magnitude relative to the values of θ, so that it only has an influence when no other “forces” are acting on a given term in ω.

Primal Decoding: Algorithm 1 gives a summary of the dual solver, which produces a lower bound as well as a set of cuts described by the constraint matrices Ẑ^l. The subroutine isocuts(z^l) computes the set of cuts that isolate each connected component of z^l. To generate a hierarchical clustering, we solve the primal, Eq 9, using the reduced set Ẑ in order to recover a fractional solution X^l_e = min(1, max_{m≥l} (Ẑ^m γ^m)_e). We use an LP solver (IBM CPLEX) which provides this primal solution “for free” when solving the dual in Alg. 1. We round the fractional primal solution X to a discrete hierarchical clustering by thresholding: X̄^l_e ← [X^l_e > t]. We then repair (uncut) any cut edges that lie inside a connected component. In our implementation we test a few discrete thresholds t ∈ {0, 0.2, 0.4, 0.6, 0.8} and take the threshold that yields X̄ with the lowest cost. After each pass through the loop of Alg. 1 we compute these upper bounds and retain the optimal solution observed thus far.

Figure 3: (a) Boundary detection performance of our closest ultrametric algorithm (UM) and the baseline ultrametric contour maps algorithm with (UCM) and without (UCM-L) length weighting [5] on BSDS.
Black circles indicate thresholds used in the closest UM optimization. (b) Anytime performance: F-measure on the BSDS benchmark as a function of run-time. UM, and UCM with and without length weighting, achieve a maximum F-measure of 0.728, 0.726, and 0.718 respectively.

7 Experiments

We applied our algorithm to segmenting images from the Berkeley Segmentation Data set (BSDS) [16]. We use superpixels generated by performing an oriented watershed transform on the output of the global probability of boundary (gPb) edge detector [17], and construct a planar graph whose vertices are superpixels, with edges connecting neighbors in the image plane, whose base distance θ is derived from gPb. Let gPb_e be the local estimate of boundary contrast given by averaging the gPb classifier output over the boundary between a pair of neighboring superpixels. We truncate extreme values to enforce that gPb_e ∈ [ϵ, 1 − ϵ] with ϵ = 0.001 and set

$\theta_e = \log \frac{gPb_e}{1 - gPb_e} + \log \frac{1 - \epsilon}{\epsilon}$

The additive offset assures that θ_e ≥ 0. In our experiments we use a fixed set of eleven distance threshold levels {δ_l} chosen to uniformly span the useful range of threshold values [9.6, 12.6]. Finally, we weighted edges proportionally to the length of the corresponding boundary in the image. We performed dual cutting plane iterations until convergence or 2000 seconds had passed. Lower bounds for the BSDS segmentations were on the order of −10³ or −10⁴; we terminate when the total residual is greater than −2 × 10⁻⁴. All code was written in MATLAB using the Blossom V implementation of minimum-weight perfect matching [15] and the IBM ILOG CPLEX LP solver with default options.

Baseline: We compare our results with the hierarchical clusterings produced by the Ultrametric Contour Map (UCM) [5]. UCM performs agglomerative clustering of superpixels and assigns the length-weighted averaged gPb value as the distance between each pair of merged regions.
While UCM was not explicitly designed to find the closest ultrametric, it provides a strong baseline for hierarchical clustering. To compute the closest l-level ultrametric corresponding to the UCM clustering result, we solve the minimization in Eq. 6 while restricting each multicut to be the partition at some level of the UCM hierarchy.

Convergence and Timing: Figure 2 shows the average behavior of convergence as a function of runtime. We found the upper bound given by the cost of the decoded integer solution and the lower bound estimated by the dual LP are very close. The integrality gap is typically within 0.1% of the lower bound and never more than 1%. Convergence of the dual is achieved quite rapidly; most instances require less than 100 iterations to converge, with roughly linear growth in the size of the LP at each iteration as cutting planes are added. In Fig 2 we display a histogram, computed over test image problem instances, of the cost of UCM solutions relative to those produced by the closest ultrametric (UM) estimated by our method. A ratio of less than 1 indicates that our approach generated a solution with a lower distortion ultrametric. In no problem instance did UCM outperform our UM algorithm.

Figure 4: The proposed closest ultrametric (UM) enforces consistency across levels, while performing independent multi-cut clustering (MC) at each threshold does not guarantee a hierarchical segmentation (c.f. first image, columns 3 and 4). In the second image, hierarchical segmentation (UM) better preserves semantic parts of the two birds while correctly merging the background regions.

Segmentation Quality: Figure 3 shows the segmentation benchmark accuracy of our closest ultrametric algorithm (denoted UM) along with the baseline ultrametric contour maps algorithm (UCM) with and without length weighting [5].
In terms of segmentation accuracy, UM performs nearly identically to the state-of-the-art UCM algorithm, with some small gains in the high-precision regime. It is worth noting that the BSDS benchmark does not provide strong penalties for small leaks between two segments when the total number of boundary pixels involved is small. Our algorithm may find strong application in domains where the local boundary signal is noisier (e.g., biological imaging) or when under-segmentation is more heavily penalized. While our cutting-plane approach is slower than agglomerative clustering, it is not necessary to wait for convergence in order to produce high quality results. We found that while the gap between the upper and lower bounds decreases as a function of time, the clustering performance as measured by precision-recall is often nearly optimal after only ten seconds and remains stable. Figure 3 shows a plot of the F-measure achieved by UM as a function of time.

Importance of enforcing hierarchical constraints: Although independently finding multicuts at different thresholds often produces hierarchical clusterings, this is by no means guaranteed. We ran Algorithm 1 while setting ω^l_e = 0, allowing each layer to be solved independently. Fig 4 shows examples where hierarchical constraints between layers improve segmentation quality relative to independent clustering at each threshold.

8 Conclusion

We have introduced a new method for approximating the closest ultrametric on planar graphs that is applicable to hierarchical image segmentation. Our contribution is a dual cutting plane approach that exploits the introduction of novel slack terms that allow for representing a much larger space of solutions with relatively few cutting planes. This yields an efficient algorithm that provides rigorous bounds on the quality of the resulting solution.
We empirically observe that our algorithm rapidly produces compelling image segmentations along with lower and upper bounds that are nearly tight on the benchmark BSDS test data set.

Acknowledgements: JY acknowledges the support of Experian; CF acknowledges the support of NSF grants IIS-1253538 and DBI-1262547.

References

[1] Nir Ailon and Moses Charikar. Fitting tree metrics: Hierarchical clustering and phylogeny. In Foundations of Computer Science, pages 73–82, 2005.
[2] Bjoern Andres, Joerg H. Kappes, Thorsten Beier, Ullrich Kothe, and Fred A. Hamprecht. Probabilistic image segmentation with closedness constraints. In Proc. of ICCV, pages 2611–2618, 2011.
[3] Bjoern Andres, Thorben Kroger, Kevin L. Briggman, Winfried Denk, Natalya Korogod, Graham Knott, Ullrich Kothe, and Fred A. Hamprecht. Globally optimal closed-surface segmentation for connectomics. In Proc. of ECCV, 2012.
[4] Bjoern Andres, Julian Yarkony, B. S. Manjunath, Stephen Kirchhoff, Engin Turetken, Charless Fowlkes, and Hanspeter Pfister. Segmenting planar superpixel adjacency graphs w.r.t. non-planar superpixel affinity graphs. In Proc. of EMMCVPR, 2013.
[5] Pablo Arbelaez, Michael Maire, Charless Fowlkes, and Jitendra Malik. Contour detection and hierarchical image segmentation. IEEE Trans. Pattern Anal. Mach. Intell., 33(5):898–916, May 2011.
[6] Yoram Bachrach, Pushmeet Kohli, Vladimir Kolmogorov, and Morteza Zadimoghaddam. Optimal coalition structure generation in cooperative graph games. In Proc. of AAAI, 2013.
[7] Shai Bagon and Meirav Galun. Large scale correlation clustering. CoRR, abs/1112.2903, 2011.
[8] F. Barahona. On the computational complexity of Ising spin glass models. Journal of Physics A: Mathematical, Nuclear and General, 15(10):3241–3253, April 1982.
[9] F. Barahona. On cuts and matchings in planar graphs. Mathematical Programming, 36(2):53–68, November 1991.
[10] F. Barahona and A. Mahjoub. On the cut polytope. Mathematical Programming, 60(1-3):157–173, September 1986.
[11] Thorsten Beier, Thorben Kroeger, Jorg H. Kappes, Ullrich Kothe, and Fred A. Hamprecht. Cut, glue, and cut: A fast, approximate solver for multicut partitioning. In Computer Vision and Pattern Recognition (CVPR), 2014 IEEE Conference on, pages 73–80, 2014.
[12] Michel Deza and Monique Laurent. Geometry of Cuts and Metrics, volume 15. Springer Science & Business Media, 1997.
[13] Michael Fisher. On the dimer solution of planar Ising models. Journal of Mathematical Physics, 7(10):1776–1781, 1966.
[14] Sungwoong Kim, Sebastian Nowozin, Pushmeet Kohli, and Chang Dong Yoo. Higher-order correlation clustering for image segmentation. In Advances in Neural Information Processing Systems 25, pages 1530–1538, 2011.
[15] Vladimir Kolmogorov. Blossom V: a new implementation of a minimum cost perfect matching algorithm. Mathematical Programming Computation, 1(1):43–67, 2009.
[16] David Martin, Charless Fowlkes, Doron Tal, and Jitendra Malik. A database of human segmented natural images and its application to evaluating segmentation algorithms and measuring ecological statistics. In Proc. of ICCV, pages 416–423, 2001.
[17] David Martin, Charless C. Fowlkes, and Jitendra Malik. Learning to detect natural image boundaries using local brightness, color, and texture cues. IEEE Trans. Pattern Anal. Mach. Intell., 26(5):530–549, May 2004.
[18] Julian Yarkony. Analyzing PlanarCC. NIPS 2014 workshop, 2014.
[19] Julian Yarkony, Thorsten Beier, Pierre Baldi, and Fred A. Hamprecht. Parallel multicut segmentation via dual decomposition. In New Frontiers in Mining Complex Patterns, 2014.
[20] Julian Yarkony, Alexander Ihler, and Charless Fowlkes. Fast planar correlation clustering for image segmentation. In Proc. of ECCV, 2012.
[21] Chong Zhang, Julian Yarkony, and Fred A. Hamprecht. Cell detection and segmentation using correlation clustering. In MICCAI, volume 8673, pages 9–16, 2014.
Sparse Local Embeddings for Extreme Multi-label Classification

Kush Bhatia†, Himanshu Jain§, Purushottam Kar‡∗, Manik Varma†, and Prateek Jain†
†Microsoft Research, India  §Indian Institute of Technology Delhi, India  ‡Indian Institute of Technology Kanpur, India
{t-kushb,prajain,manik}@microsoft.com, himanshu.j689@gmail.com, purushot@cse.iitk.ac.in

Abstract

The objective in extreme multi-label learning is to train a classifier that can automatically tag a novel data point with the most relevant subset of labels from an extremely large label set. Embedding based approaches attempt to make training and prediction tractable by assuming that the training label matrix is low-rank and reducing the effective number of labels by projecting the high dimensional label vectors onto a low dimensional linear subspace. Still, leading embedding approaches have been unable to deliver high prediction accuracies, or scale to large problems, as the low rank assumption is violated in most real world applications. In this paper we develop the SLEEC classifier to address both limitations. The main technical contribution in SLEEC is a formulation for learning a small ensemble of local distance preserving embeddings which can accurately predict infrequently occurring (tail) labels. This allows SLEEC to break free of the traditional low-rank assumption and boost classification accuracy by learning embeddings which preserve pairwise distances between only the nearest label vectors. We conducted extensive experiments on several real-world as well as benchmark data sets and compared our method against state-of-the-art methods for extreme multi-label classification. Experiments reveal that SLEEC can make significantly more accurate predictions than the state-of-the-art methods, including both embedding-based (by as much as 35%) and tree-based (by as much as 6%) methods.
SLEEC can also scale efficiently to data sets with a million labels, which are beyond the pale of leading embedding methods.

1 Introduction

In this paper we develop SLEEC (Sparse Local Embeddings for Extreme Classification), an extreme multi-label classifier that can make significantly more accurate and faster predictions, as well as scale to larger problems, as compared to state-of-the-art embedding based approaches. eXtreme Multi-label Learning (XML) addresses the problem of learning a classifier that can automatically tag a data point with the most relevant subset of labels from a large label set. For instance, there are more than a million labels (categories) on Wikipedia and one might wish to build a classifier that annotates a new article or web page with the subset of most relevant Wikipedia categories. It should be emphasized that multi-label learning is distinct from multi-class classification, where the aim is to predict a single mutually exclusive label.

Challenges: XML is a hard problem that involves learning with hundreds of thousands, or even millions, of labels, features and training points. Although some of these problems can be ameliorated using a label hierarchy, such hierarchies are unavailable in many applications [1, 2]. In this setting, an obvious baseline is thus provided by the 1-vs-All technique, which seeks to learn an independent classifier per label. As expected, this technique is infeasible due to the prohibitive training and prediction costs given the large number of labels.

∗This work was done while P.K. was a postdoctoral researcher at Microsoft Research India.

Embedding-based approaches: A natural way of overcoming the above problem is to reduce the effective number of labels. Embedding based approaches try to do so by projecting label vectors onto a low dimensional space, based on an assumption that the label matrix is low-rank.
More specifically, given a set of n training points {(x_i, y_i)}_{i=1}^{n} with d-dimensional feature vectors x_i ∈ R^d and L-dimensional label vectors y_i ∈ {0, 1}^L, state-of-the-art embedding approaches project the label vectors onto a lower, L̂-dimensional linear subspace as z_i = U y_i. Regressors are then trained to predict z_i as V x_i. Labels for a novel point x are predicted by post-processing y = U† V x, where U† is a decompression matrix which lifts the embedded label vectors back to the original label space. Embedding methods mainly differ in the choice of their compression and decompression techniques, such as compressed sensing [3], Bloom filters [4], SVD [5], landmark labels [6, 7], output codes [8], etc. The state-of-the-art LEML algorithm [9] directly optimizes for U†, V using a regularized least squares objective. Embedding approaches have many advantages including simplicity, ease of implementation, strong theoretical foundations, the ability to handle label correlations, as well as the ability to adapt to online and incremental scenarios. Consequently, embeddings have proved to be the most popular approach for tackling XML problems [6, 7, 10, 4, 11, 3, 12, 9, 5, 13, 8, 14]. Embedding approaches also have limitations – they are slow at training and prediction even for small embedding dimensions L̂. For instance, on WikiLSHTC [15, 16], a Wikipedia based challenge data set, LEML with L̂ = 500 takes ∼12 hours to train even with early termination, whereas prediction takes nearly 300 milliseconds per test point. In fact, for text applications with d̂-sparse feature vectors such as WikiLSHTC (where d̂ = 42 ≪ L̂ = 500), LEML’s prediction time Ω(L̂(d̂ + L)) can be an order of magnitude more than even 1-vs-All’s prediction time O(d̂L). More importantly, the critical assumption made by embedding methods, that the training label matrix is low-rank, is violated in almost all real world applications.
Figure 1(a) plots the approximation error in the label matrix as L̂ is varied on the WikiLSHTC data set. As is clear, even with a 500-dimensional subspace the label matrix still has 90% approximation error. This happens primarily due to the presence of hundreds of thousands of “tail” labels (Figure 1(b)), which occur in at most 5 data points each and, hence, cannot be well approximated by any linear low dimensional basis.

The SLEEC approach: Our algorithm SLEEC extends embedding methods in multiple ways to address these limitations. First, instead of globally projecting onto a linear low-rank subspace, SLEEC learns embeddings z_i which non-linearly capture label correlations by preserving the pairwise distances between only the closest (rather than all) label vectors, i.e. d(z_i, z_j) ≈ d(y_i, y_j) only if i ∈ kNN(j), where d is a distance metric. Regressors V are trained to predict z_i = V x_i. We propose a novel formulation for learning such embeddings that can be formally shown to consistently preserve nearest neighbours in the label space. We build an efficient pipeline for training these embeddings which can be orders of magnitude faster than state-of-the-art embedding methods. During prediction, rather than using a decompression matrix, SLEEC uses a k-nearest neighbour (kNN) classifier in the embedding space, thus leveraging the fact that nearest neighbours have been preserved during training. Thus, for a novel point x, the predicted label vector is obtained using $y = \sum_{i : V x_i \in kNN(V x)} y_i$. The use of a kNN classifier is well motivated as kNN outperforms discriminative methods in acutely low training data regimes [17], as is the case with tail labels. The superiority of SLEEC’s proposed embeddings over traditional low-rank embeddings can be seen by looking at Figure 1, which shows that the relative approximation error in learning SLEEC’s embeddings is significantly smaller as compared to the low-rank approximation error.
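The effect described above is easy to reproduce on synthetic data: a label matrix dominated by tail labels resists low-rank approximation. The sketch below (sizes and sparsity levels are illustrative, not WikiLSHTC) computes the relative Frobenius error of rank-k SVD approximations:

```python
import numpy as np

rng = np.random.default_rng(0)
n, L = 200, 300

# A few dense "head" labels plus many rare "tail" labels that each
# appear in fewer than 5 documents:
Y = np.zeros((L, n))
Y[:5] = rng.random((5, n)) < 0.5
for j in range(5, L):
    docs = rng.choice(n, size=rng.integers(1, 5), replace=False)
    Y[j, docs] = 1

U, s, Vt = np.linalg.svd(Y, full_matrices=False)
errs = {}
for k in (10, 50, 100):
    Yk = (U[:, :k] * s[:k]) @ Vt[:k]          # best rank-k approximation
    errs[k] = float(np.linalg.norm(Y - Yk) ** 2 / np.linalg.norm(Y) ** 2)
print(errs)   # error decays slowly as k grows: the tail resists compression
```

Because each tail label occupies an essentially independent direction, the error falls only roughly linearly in k, mirroring the behavior reported for Figure 1(a).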
Figure 1: (a) Error ‖Y − Y_{L̂}‖²_F / ‖Y‖²_F in approximating the label matrix Y. Global SVD denotes the error incurred by computing the rank-L̂ SVD of Y. Local SVD computes the rank-L̂ SVD of Y within each cluster. SLEEC NN objective denotes SLEEC’s objective function. Global SVD incurs 90% error, and the error is decreasing at most linearly as well. (b) shows the number of documents in which each label is present for the WikiLSHTC data set. There are about 300K labels which are present in < 5 documents, lending it a ‘heavy tailed’ distribution. (c) shows the Precision@1 accuracy of SLEEC and localLEML on the Wiki-10 data set as we vary the number of clusters.

Moreover, we also find that SLEEC can improve the prediction accuracy of state-of-the-art embedding methods by as much as 35% (absolute) on the challenging WikiLSHTC data set. SLEEC also significantly outperforms methods such as WSABIE [13], which also use kNN classification in the embedding space but learn their embeddings using the traditional low-rank assumption.

Clustering based speedup: However, kNN classifiers are known to be slow at prediction. SLEEC therefore clusters the training data into C clusters, learning a separate embedding per cluster and performing kNN classification within the test point’s cluster alone. This allows SLEEC to be more
Moreover, the clustering trick does not significantly benefit other state-of-the-art methods (see Figure 1(c)), thus indicating that SLEEC's embeddings are key to its performance boost. Since clustering can be unstable in high dimensions, SLEEC compensates by learning a small ensemble where each individual learner is generated from a different random clustering. This was empirically found to tackle the instabilities of clustering and to significantly boost prediction accuracy with only linear increases in training and prediction time. For instance, on WikiLSHTC, SLEEC's prediction accuracy was 55% with an 8 millisecond prediction time, whereas LEML could only manage 20% accuracy while taking 300 milliseconds per test point. Tree-based approaches: Recently, tree-based methods [1, 15, 2] have also become popular for XML as they enjoy significant accuracy gains over existing embedding methods. For instance, FastXML [15] achieves a prediction accuracy of 49% on WikiLSHTC using a 50-tree ensemble. However, using SLEEC, we are now able to extend embedding methods to outperform tree ensembles, achieving 49.8% with 2 learners and 55% with 10. Thus, SLEEC obtains the best of both worlds: it achieves the highest prediction accuracies across all methods on even the most challenging data sets, while retaining all the benefits of embeddings and eschewing the disadvantages of large tree ensembles such as large model size and lack of theoretical understanding.

2 Method

Let D = {(x_1, y_1), . . . , (x_n, y_n)} be the given training data set, x_i ∈ X ⊆ R^d the input feature vector, y_i ∈ Y ⊆ {0,1}^L the corresponding label vector, and y_ij = 1 iff the j-th label is turned on for x_i. Let X = [x_1, . . . , x_n] be the data matrix and Y = [y_1, . . . , y_n] be the label matrix. Given D, the goal is to learn a multi-label classifier f : R^d → {0,1}^L that accurately predicts the label vector for a given test point.
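A hypothetical toy instance of this data layout (the shapes and label lists are made up for illustration) can be built as follows:

```python
import numpy as np

# Toy training set D = {(x_i, y_i)}: n = 4 points, d = 3 features, L = 5 labels.
# y_ij = 1 iff label j is turned on for point i.
labels = [[0, 2], [1], [2, 4], [0]]      # active label ids per point
n, d, L = 4, 3, 5
X = np.random.default_rng(0).normal(size=(d, n))   # data matrix, columns are points
Y = np.zeros((L, n))
for i, active in enumerate(labels):
    Y[active, i] = 1.0                   # label matrix, columns are label vectors
```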
Recall that in XML settings, L is very large and of the same order as n and d, ruling out several standard approaches such as 1-vs-All. We now present our algorithm SLEEC, which is designed primarily to scale efficiently to large L. Our algorithm is an embedding-style algorithm: during training we map the label vectors y_i to L̂-dimensional vectors z_i ∈ R^L̂ and learn a set of regressors V ∈ R^{L̂×d} such that z_i ≈ V x_i, ∀i. During the test phase, for an unseen point x, we first compute its embedding V x and then perform kNN over the set [V x_1, V x_2, . . . , V x_n]. To scale our algorithm, we cluster the training points and apply the above procedure within each cluster separately. Below, we first discuss our method for computing the embeddings z_i and the regressors V. Section 2.2 then discusses our approach for scaling the method to large data sets.

2.1 Learning Embeddings

As mentioned earlier, our approach is motivated by the fact that a typical real-world data set tends to have a large number of tail labels, which ensure that the label matrix Y cannot be well approximated using a low-dimensional linear subspace (see Figure 1). However, Y can still be accurately modeled using a low-dimensional non-linear manifold.

Algorithm 1 SLEEC: Train Algorithm
Require: D = {(x_1, y_1), . . . , (x_n, y_n)}; embedding dimensionality L̂; no. of neighbors n̄; no. of clusters C; regularization parameters λ, µ; L1 smoothing parameter ρ
1: Partition X into Q^1, . . . , Q^C using k-means
2: for each partition Q^j do
3:   Form Ω using the n̄ nearest neighbors of each label vector y_i ∈ Q^j
4:   [U, Σ] ← SVP(P_Ω(Y^j (Y^j)^T), L̂)
5:   Z^j ← U Σ^{1/2}
6:   V^j ← ADMM(X^j, Z^j, λ, µ, ρ)
7:   Z^j ← V^j X^j
8: end for
9: Output: {(Q^1, V^1, Z^1), . . . , (Q^C, V^C, Z^C)}

Algorithm 2 SLEEC: Test Algorithm
Require: test point x; no. of nearest neighbors n̄; no. of desired labels p
1: Q^τ ← partition closest to x
2: z ← V^τ x
3: N_z ← n̄ nearest neighbors of z in Z^τ
4: P_x ← empirical label distribution for the points in N_z
5: y_pred ← Top_p(P_x)

Sub-routine 3 SLEEC: SVP
Require: observations G; index set Ω; dimensionality L̂
1: M_1 := 0, η = 1
2: repeat
3:   M̂ ← M + η(G − P_Ω(M))
4:   [U, Σ] ← Top-EigenDecomp(M̂, L̂)
5:   Σ_ii ← max(0, Σ_ii), ∀i
6:   M ← U · Σ · U^T
7: until convergence
8: Output: U, Σ

Sub-routine 4 SLEEC: ADMM
Require: data matrix X; embeddings Z; regularization parameters λ, µ; smoothing parameter ρ
1: β := 0, α := 0
2: repeat
3:   Q ← (Z + ρ(α − β))X^T
4:   V ← Q(XX^T(1 + ρ) + λI)^{−1}
5:   α ← V X + β
6:   α_i ← sign(α_i) · max(0, |α_i| − µ/ρ), ∀i
7:   β ← β + V X − α
8: until convergence
9: Output: V

That is, instead of preserving the distances (or inner products) of a given label vector to all the training points, we attempt to preserve the distances to only a few nearest neighbors. That is, we wish to find an L̂-dimensional embedding matrix Z = [z_1, . . . , z_n] ∈ R^{L̂×n} which minimizes the following objective:

min_{Z ∈ R^{L̂×n}} ‖P_Ω(Y^T Y) − P_Ω(Z^T Z)‖²_F + λ‖Z‖₁, (1)

where the index set Ω denotes the set of neighbors that we wish to preserve, i.e., (i, j) ∈ Ω iff j ∈ N_i, and N_i denotes a set of nearest neighbors of i. We select N_i = argmax_{S, |S| ≤ α·n} Σ_{j∈S} y_i^T y_j, which is the set of α·n points with the largest inner products with y_i. |N_i| is always chosen large enough so that distances (inner products) to a few far-away points are also preserved while optimizing our objective function. This prohibits non-neighboring points from entering the immediate neighborhood of any given point. P_Ω : R^{n×n} → R^{n×n} is defined as:

(P_Ω(Y^T Y))_ij = ⟨y_i, y_j⟩ if (i, j) ∈ Ω, and 0 otherwise. (2)

Also, we add L1 regularization, ‖Z‖₁ = Σ_i ‖z_i‖₁, to the objective function to obtain sparse embeddings. Sparse embeddings have three key advantages: a) they reduce prediction time, b) they reduce the size of the model, and c) they avoid overfitting. Now, given the embeddings Z = [z_1, . . .
, z_n] ∈ R^{L̂×n}, we wish to learn a multi-regression model that predicts the embeddings Z using the input features. That is, we require that Z ≈ V X, where V ∈ R^{L̂×d}. Combining the two formulations and adding L2 regularization for V, we get:

min_{V ∈ R^{L̂×d}} ‖P_Ω(Y^T Y) − P_Ω(X^T V^T V X)‖²_F + λ‖V‖²_F + µ‖V X‖₁. (3)

Note that the above formulation is somewhat similar to a few existing methods for non-linear dimensionality reduction that also seek to preserve distances to a few near neighbors [18, 19]. However, in contrast to our approach, these methods do not have a direct out-of-sample generalization, do not scale well to large data sets, and lack rigorous generalization error bounds.

Optimization: We first note that optimizing (3) is a significant challenge, as the objective function is non-convex as well as non-differentiable. Furthermore, our goal is to perform the optimization for data sets where L, n, d ≫ 100,000. To this end, we divide the optimization into two phases. We first learn the embeddings Z = [z_1, . . . , z_n] and then learn the regressors V in a second stage. That is, Z is obtained by directly solving (1), but without the L1 penalty term:

min_{Z ∈ R^{L̂×n}} ‖P_Ω(Y^T Y) − P_Ω(Z^T Z)‖²_F ≡ min_{M ⪰ 0, rank(M) ≤ L̂} ‖P_Ω(Y^T Y) − P_Ω(M)‖²_F, (4)

where M = Z^T Z. Next, V is obtained by solving the following problem:

min_{V ∈ R^{L̂×d}} ‖Z − V X‖²_F + λ‖V‖²_F + µ‖V X‖₁. (5)

Note that the Z matrix obtained using (4) need not be sparse. However, we store and use V X as our embeddings, so that sparsity is still maintained.

Optimizing (4): Note that even the simplified problem (4) is an instance of the popular low-rank matrix completion problem and is known to be NP-hard in general. The main challenge arises from the non-convex rank constraint on M. However, using the Singular Value Projection (SVP) method [20], a popular matrix completion method, we can guarantee convergence to a local minimum.
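To make the two-stage procedure concrete, the following numpy sketch forms the masked Gram matrix P_Ω(YᵀY) of (1)–(2) and runs SVP-style projected gradient steps for (4); all sizes are toy values and the code is illustrative, not the authors' implementation.

```python
import numpy as np

def masked_gram(Y, nbar):
    """P_Omega(Y^T Y): keep, for each point i, its nbar largest label inner
    products <y_i, y_j>; all other entries of the Gram matrix are zeroed."""
    G = Y.T @ Y                                   # n x n label Gram matrix
    Omega = np.zeros_like(G, dtype=bool)
    for i in range(G.shape[0]):
        Omega[i, np.argsort(-G[i])[:nbar]] = True
    Omega = Omega | Omega.T                       # symmetrize the index set
    return np.where(Omega, G, 0.0), Omega

def svp(G, Omega, Lhat, eta=1.0, iters=100):
    """Projected gradient on the observed entries; each iterate is projected
    onto rank-Lhat PSD matrices via a truncated eigendecomposition."""
    M = np.zeros_like(G)
    for _ in range(iters):
        H = M + eta * np.where(Omega, G - M, 0.0)  # step on observed entries
        H = (H + H.T) / 2                          # keep the iterate symmetric
        vals, vecs = np.linalg.eigh(H)
        top = np.argsort(vals)[::-1][:Lhat]        # top-Lhat eigenpairs
        v = np.maximum(vals[top], 0.0)             # clip to the PSD cone
        M = (vecs[:, top] * v) @ vecs[:, top].T
    return M

rng = np.random.default_rng(2)
Y = rng.integers(0, 2, size=(6, 8)).astype(float)  # L x n label matrix
G_masked, Omega = masked_gram(Y, nbar=4)
M = svp(G_masked, Omega, Lhat=3)
vals, vecs = np.linalg.eigh(M)                     # M = U diag(vals) U^T
Z = np.sqrt(np.maximum(vals[-3:], 0.0))[:, None] * vecs[:, -3:].T  # Z with M = Z^T Z
```

The final step extracts embeddings Z = Σ^{1/2} Uᵀ from the recovered rank-L̂ PSD matrix, mirroring step 5 of Algorithm 1.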
SVP is a simple projected gradient descent method where the projection is onto the set of low-rank matrices. That is, the t-th step update for SVP is given by:

M_{t+1} = P_{L̂}(M_t + η P_Ω(Y^T Y − M_t)), (6)

where M_t is the t-th iterate, η > 0 is the step size, and P_{L̂}(M) is the projection of M onto the set of rank-L̂ positive semi-definite (PSD) matrices. Note that while the set of rank-L̂ PSD matrices is non-convex, we can still project onto this set efficiently using the eigenvalue decomposition of M. That is, let M = U_M Λ_M U_M^T be the eigenvalue decomposition of M. Then P_{L̂}(M) = U_M(1:r) · Λ_M(1:r) · U_M(1:r)^T, where r = min(L̂, L̂⁺_M) and L̂⁺_M is the number of positive eigenvalues of M; Λ_M(1:r) denotes the top r eigenvalues of M and U_M(1:r) the corresponding eigenvectors. While the above update restricts the rank of all intermediate iterates M_t to be at most L̂, computing a rank-L̂ eigenvalue decomposition can still be fairly expensive for large n. However, by exploiting the special structure in the update (6), one can significantly reduce the computational complexity of the eigenvalue decomposition as well. In general, the eigenvalue decomposition can be computed in time O(L̂ζ), where ζ is the time complexity of computing a matrix-vector product. For the SVP update (6), the matrix has the special structure M̂ = M_t + η P_Ω(Y^T Y − M_t), and hence ζ = O(nL̂ + n·n̄), where n̄ = |Ω|/n is the average number of neighbors preserved by SLEEC. The per-iteration time complexity therefore reduces to O(nL̂² + nL̂n̄), which is linear in n, assuming n̄ is nearly constant.

Optimizing (5): Problem (5) contains an L1 term, which makes it non-smooth. Moreover, as the L1 term involves both V and X, we cannot directly apply standard prox-function based algorithms. Instead, we use the ADMM method to optimize (5). See Sub-routine 4 for the updates and [21] for a detailed derivation of the algorithm.

Generalization Error Analysis: Let P be a fixed (but unknown) distribution over X × Y.
Let each training point (x_i, y_i) ∈ D be sampled i.i.d. from P. Then, the goal of our non-linear embedding method (3) is to learn an embedding matrix A = V^T V that preserves the nearest neighbors (in terms of label distance/intersection) of any (x, y) ∼ P. This requirement can be formulated as the following stochastic optimization problem:

min_{A ⪰ 0, rank(A) ≤ k} L(A) = E_{(x,y),(x̃,ỹ)∼P} ℓ(A; (x, y), (x̃, ỹ)), (7)

where the loss function is ℓ(A; (x, y), (x̃, ỹ)) = g(⟨ỹ, y⟩)(⟨ỹ, y⟩ − x̃^T A x)² and g(⟨ỹ, y⟩) = I[⟨ỹ, y⟩ ≥ τ], with I[·] the indicator function. Hence, a loss is incurred only if y and ỹ have a large inner product. For an appropriate selection of the neighborhood selection operator Ω, (3) indeed minimizes a regularized empirical estimate of the loss function (7), i.e., it is a regularized ERM w.r.t. (7). We now show that the optimal solution Â of (3) indeed minimizes the loss (7) up to an additive approximation error. Existing techniques for analyzing excess risk in stochastic optimization require the empirical loss function to be decomposable over the training set, and as such do not apply to (3), which contains loss terms involving two training points. Still, using techniques from the AUC maximization literature [22], we can provide interesting excess risk bounds for Problem (7).

Theorem 1. With probability at least 1 − δ over the sampling of the data set D, the solution Â of the optimization problem (3) satisfies

L(Â) ≤ inf_{A* ∈ A} { L(A*) + C(L̄² + r² + ‖A*‖²_F R⁴) √((1/n) log(1/δ)) },

where the second term inside the infimum is the excess risk E-Risk(n), Â is the minimizer of (3), r = L̄/λ, and A := { A ∈ R^{d×d} : A ⪰ 0, rank(A) ≤ L̂ }.

See Appendix A for a proof of the result. Note that the generalization error bound is independent of both d and L, which is critical for extreme multi-label classification problems with large d and L. In fact, the error bound depends only on L̄ ≪ L, the average number of positive labels per data point.
Moreover, our bound also provides a way to compute the best regularization parameter λ, namely the one that minimizes the error bound; in practice, however, we set λ to be a fixed constant. Theorem 1 only preserves the population neighbors of a test point. Theorem 7, given in Appendix A, extends Theorem 1 to ensure that the neighbors in the training set are also preserved. We would also like to stress that our excess risk bound is universal and hence holds even if Â does not minimize (3), i.e., L(Â) ≤ L(A*) + E-Risk(n) + (L_n(Â) − L_n(A*)), where L_n denotes the empirical objective and E-Risk(n) is given in Theorem 1.

2.2 Scaling to Large-scale Data Sets

For large-scale data sets, one might require the embedding dimension L̂ to be fairly large (say a few hundred), which can make computing the updates (6) infeasible. Hence, to scale to such large data sets, SLEEC clusters the given data points into smaller local regions. Several text-based data sets indeed reveal that there exist small local regions in the feature space where both the number of points and the number of labels are reasonably small. Hence, we can train our embedding method over such local regions without significantly sacrificing overall accuracy. We would like to stress that despite clustering the data points into homogeneous regions, the label matrix of any given cluster is still not close to low-rank. Hence, applying a state-of-the-art linear embedding method, such as LEML, to each cluster is still significantly less accurate than our method (see Figure 1). Naturally, one could cluster the data set into an extremely large number of regions, so that eventually the label matrix is low-rank in each cluster. However, increasing the number of clusters beyond a certain limit might decrease accuracy, as the error incurred during the cluster assignment phase itself might nullify the gain in accuracy from better embeddings. Figure 1 illustrates this phenomenon: increasing the number of clusters beyond a certain limit in fact decreases the accuracy of LEML.
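The ensemble-of-clusterings idea can be sketched as below; the random-centroid clustering and the per-cluster mean-label "model" are crude stand-ins, assumed purely for illustration, for k-means with random initializations and the per-cluster SLEEC learner.

```python
import numpy as np

def random_clustering_learner(X, Y, C, seed):
    """One ensemble member: cluster the points around C randomly chosen
    data points and store each cluster's mean label vector as a toy
    stand-in for the per-cluster embedding model."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=C, replace=False)]
    assign = np.argmin(((X[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
    means = np.stack([Y[assign == c].mean(axis=0) if (assign == c).any()
                      else np.zeros(Y.shape[1]) for c in range(C)])
    return centers, means

def ensemble_score(learners, x):
    """Average the learners' scores; each learner routes x to its own cluster,
    so different random clusterings hedge against unstable partitions."""
    out = []
    for centers, means in learners:
        c = np.argmin(((centers - x) ** 2).sum(-1))
        out.append(means[c])
    return np.mean(out, axis=0)

rng = np.random.default_rng(3)
X = rng.normal(size=(30, 5))
Y = rng.integers(0, 2, size=(30, 6)).astype(float)
learners = [random_clustering_learner(X, Y, C=3, seed=s) for s in range(5)]
score = ensemble_score(learners, X[0])
```

Each added learner costs one more clustering and one more model, which matches the "linear increase in training and prediction time" noted earlier.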
Algorithm 1 provides pseudo-code for our training algorithm. We first cluster the data points into C partitions. Then, for each partition, we learn a set of embeddings using Sub-routine 3 and compute the regression parameters V^τ, 1 ≤ τ ≤ C, using Sub-routine 4. For a given test point x, we first identify the appropriate cluster τ. Then, we compute the embedding z = V^τ x. The label vector is then predicted using kNN in the embedding space. See Algorithm 2 for more details. Owing to the curse of dimensionality, clustering turns out to be quite unstable for data sets with large d and in many cases leads to a drop in prediction accuracy. To safeguard against such instability, we use an ensemble of models generated using different sets of clusters, obtained from different initialization points of our clustering procedure. Our empirical results demonstrate that using such ensembles leads to a significant increase in the accuracy of SLEEC (see Figure 2) and also leads to stable solutions with small variance (see Table 4).

3 Experiments

Experiments were carried out on some of the largest XML benchmark data sets, demonstrating that SLEEC achieves significantly higher prediction accuracies than the state-of-the-art. It is also demonstrated that SLEEC can be faster at training and prediction than leading embedding techniques such as LEML.

Figure 2: Variation in Precision@1 accuracy with model size and the number of learners on large-scale data sets: (a) WikiLSHTC (L = 325K, d = 1.61M, n = 1.77M), accuracy vs. model size (GB); (b) WikiLSHTC, accuracy vs. number of learners; (c) Wiki10 (L = 30K, d = 101K, n = 14K), accuracy vs. number of learners. Clearly, SLEEC achieves better accuracy than FastXML and LocalLEML-Ensemble at every point of the curve. For WikiLSHTC, SLEEC with a single learner is more accurate than LocalLEML-Ensemble with even 15 learners. Similarly, SLEEC with 2 learners achieves higher accuracy than FastXML with 50 learners.

Data sets: Experiments were carried out on multi-label data sets including Ads1M [15] (1M labels), Amazon [23] (670K labels), WikiLSHTC (320K labels), DeliciousLarge [24] (200K labels) and Wiki10 [25] (30K labels). All the data sets are publicly available except Ads1M, which is proprietary and is included here to test the scaling capabilities of SLEEC. Unfortunately, most existing embedding techniques do not scale to such large data sets. We therefore also present comparisons on publicly available small data sets such as BibTeX [26], MediaMill [27], Delicious [28] and EURLex [29] (Table 2 in the appendix lists their statistics).

Baseline algorithms: This paper's primary focus is on comparing SLEEC to state-of-the-art methods which can scale to the large data sets: the embedding-based LEML [9] and the tree-based FastXML [15] and LPSR [2]. Naïve Bayes was used as the base classifier in LPSR, as was done in [15]. Techniques such as CS [3], CPLST [30], ML-CSSP [7] and 1-vs-All [31] could only be trained on the small data sets given standard resources; comparisons between SLEEC and these techniques are therefore presented in the supplementary material. The implementations of LEML and FastXML were provided by their authors. We implemented the remaining algorithms and ensured that the published results could be reproduced; results were verified by the authors wherever possible.

Hyper-parameters: Most of SLEEC's hyper-parameters were kept fixed, including the number of clusters per learner (⌊N_Train/6000⌋), the embedding dimension (100 for the small data sets and 50 for the large ones), the number of learners in the ensemble (15), and the parameters used for optimizing (3).
The remaining two hyper-parameters, the k in kNN classification and the number of neighbours considered during SVP, were both set by limited validation on a validation set. The hyper-parameters of all the other algorithms were set using fine-grained validation on each data set so as to achieve the highest possible prediction accuracy for each method. In addition, all the other embedding methods were allowed a much larger embedding dimension (0.8L) than SLEEC (100) to give them as much opportunity as possible to outperform SLEEC.

Evaluation Metrics: We evaluate algorithms using metrics that have been widely adopted for XML and ranking tasks. Precision at k (P@k), which counts the fraction of correct predictions among the top k scoring labels in ŷ, has been widely utilized [1, 3, 15, 13, 2, 9]. We use the ranking measure nDCG@k as a second evaluation metric. We refer the reader to the supplementary material (Appendix B.1 and Tables 5 and 6) for further descriptions of the metrics and results.

Results on large data sets with more than 100K labels: Table 1a compares SLEEC's prediction accuracy, in terms of P@k for k ∈ {1, 3, 5}, to all the leading methods that could be trained on five such data sets. SLEEC improves over the leading embedding method, LEML, by as much as 35% and 15% in terms of P@1 and P@5 on WikiLSHTC. Similarly, SLEEC outperforms LEML by 27% and 22% in terms of P@1 and P@5 on the Amazon data set, which also has many tail labels. The gains on the other data sets are consistent but smaller, as the tail-label problem is not as acute. SLEEC also outperforms the leading tree method, FastXML, by 6% in terms of P@1 and P@5 on WikiLSHTC and Wiki10, respectively. This demonstrates the superiority of SLEEC's overall pipeline of local distance preserving embeddings followed by kNN classification. SLEEC also has better scaling properties than all other embedding methods.
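The two metrics used above can be sketched as follows (binary relevance; a minimal illustration rather than the official evaluation code):

```python
import numpy as np

def precision_at_k(y_true, scores, k):
    """P@k: fraction of the k top-scoring predicted labels that are relevant."""
    top = np.argsort(-scores)[:k]
    return y_true[top].sum() / k

def ndcg_at_k(y_true, scores, k):
    """nDCG@k with binary relevance: discounted gain normalized by the DCG
    of the ideal ranking (all relevant labels first)."""
    top = np.argsort(-scores)[:k]
    discounts = np.log2(np.arange(2, k + 2))        # log2(rank + 1)
    dcg = (y_true[top] / discounts).sum()
    ideal = (np.sort(y_true)[::-1][:k] / discounts).sum()
    return dcg / ideal if ideal > 0 else 0.0

# Toy example: labels 0 and 2 are relevant; the scores rank label 2 third.
y = np.array([1, 0, 1, 0, 0])
s = np.array([0.9, 0.8, 0.1, 0.05, 0.0])
```

For these toy values, P@1 = 1.0 and P@3 = 2/3, since two of the three top-scoring labels are relevant.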
In particular, apart from LEML, no other embedding approach could scale to the large data sets, and even LEML could not scale to Ads1M with a million labels. In contrast, a single SLEEC learner could be learnt on WikiLSHTC in 4 hours on a single core and already gave a ∼20% improvement in P@1 over LEML (see Figure 2 for the variation in accuracy with the number of SLEEC learners). In fact, SLEEC's training time on WikiLSHTC was comparable to that of the tree-based FastXML: FastXML trains 50 trees in 7 hours on a single core to achieve a P@1 of 49.37%, whereas SLEEC achieves 49.98% by training 2 learners in 8 hours. Similarly, SLEEC's training time on Ads1M was 6 hours per learner on a single core. SLEEC's predictions could also be up to 300 times faster than LEML's. For instance, on WikiLSHTC, SLEEC made predictions in 8 milliseconds per test point, compared to LEML's 279. SLEEC therefore brings the prediction time of embedding methods much closer to that of tree-based methods (FastXML took 0.5 milliseconds per test point on WikiLSHTC) and within the acceptable limit of most real-world applications.

Table 1: Precision accuracies. (a) Large-scale data sets: our proposed method SLEEC is as much as 35% more accurate in terms of P@1 and 22% in terms of P@5 than LEML, a leading embedding method. Other embedding-based methods do not scale to the large-scale data sets; we compare against them on small-scale data sets in Table 3. SLEEC is also 6% more accurate (w.r.t. P@1 and P@5) than FastXML, a state-of-the-art tree method. '-' indicates LEML could not be run with the standard resources. (b) Small-scale data sets: SLEEC consistently outperforms state-of-the-art approaches. WSABIE, which also uses a kNN classifier on its embeddings, is significantly less accurate than SLEEC on all the data sets, showing the superiority of our embedding learning algorithm.

(a)
Data set              SLEEC   LEML    FastXML  LPSR-NB
Wiki10          P@1   85.54   73.50   82.56    72.71
                P@3   73.59   62.38   66.67    58.51
                P@5   63.10   54.30   56.70    49.40
Delicious-Large P@1   47.03   40.30   42.81    18.59
                P@3   41.67   37.76   38.76    15.43
                P@5   38.88   36.66   36.34    14.07
WikiLSHTC       P@1   55.57   19.82   49.35    27.43
                P@3   33.84   11.43   32.69    16.38
                P@5   24.07    8.39   24.03    12.01
Amazon          P@1   35.05    8.13   33.36    28.65
                P@3   31.25    6.83   29.30    24.88
                P@5   28.56    6.03   26.12    22.37
Ads-1m          P@1   21.84    -      23.11    17.08
                P@3   14.30    -      13.86    11.38
                P@5   11.01    -      10.12     8.83

(b)
Data set              SLEEC   LEML    FastXML  WSABIE  OneVsAll
BibTex          P@1   65.57   62.53   63.73    54.77   61.83
                P@3   40.02   38.40   39.00    32.38   36.44
                P@5   29.30   28.21   28.54    23.98   26.46
Delicious       P@1   68.42   65.66   69.44    64.12   65.01
                P@3   61.83   60.54   63.62    58.13   58.90
                P@5   56.80   56.08   59.10    53.64   53.26
MediaMill       P@1   87.09   84.00   84.24    81.29   83.57
                P@3   72.44   67.19   67.39    64.74   65.50
                P@5   58.45   52.80   53.14    49.82   48.57
EurLEX          P@1   80.17   61.28   68.69    70.87   74.96
                P@3   65.39   48.66   57.73    56.62   62.92
                P@5   53.75   39.91   48.00    46.20   53.42

Effect of clustering and multiple learners: As mentioned in the introduction, other embedding methods could also be extended by clustering the data and then learning a local embedding in each cluster. Ensembles could also be learnt from multiple such clusterings.
We extend LEML in such a fashion, referring to it as LocalLEML, by using exactly the same 300 clusters per learner in the ensemble as used in SLEEC, for a fair comparison. As can be seen in Figure 2, SLEEC significantly outperforms LocalLEML, with a single SLEEC learner being much more accurate than an ensemble of even 10 LocalLEML learners. Figure 2 also demonstrates that SLEEC's ensemble can be much more accurate at prediction than the tree-based FastXML ensemble (the same plot is also presented in the appendix, depicting the variation in accuracy with model size in RAM rather than the number of learners in the ensemble). The figure also shows that very few SLEEC learners need to be trained before accuracy starts saturating. Finally, Table 4 shows that the variance in SLEEC's prediction accuracy (w.r.t. different cluster initializations) is very small, indicating that the method is stable even though clustering is performed in more than a million dimensions.

Results on small data sets: Table 3, in the appendix, compares the performance of SLEEC to several popular methods including embedding, tree, kNN and 1-vs-All SVM approaches. Even though the tail-label problem is not acute on these data sets, and SLEEC was restricted to a single learner, SLEEC's predictions are significantly more accurate than all the other methods (except on Delicious, where SLEEC ranked second). For instance, SLEEC outperforms the closest competitor on EurLex by 3% in terms of P@1. Particularly noteworthy is the observation that SLEEC outperformed WSABIE [13], which performs kNN classification on linear embeddings, by as much as 10% on multiple data sets. This demonstrates the superiority of SLEEC's local distance preserving embeddings over traditional low-rank embeddings.

Acknowledgments

We are grateful to Abhishek Kadian for helping with the experiments. Himanshu Jain is supported by a Google India PhD Fellowship at IIT Delhi.

References

[1] R. Agrawal, A. Gupta, Y.
Prabhu, and M. Varma. Multi-label learning with millions of labels: Recommending advertiser bid phrases for web pages. In WWW, pages 13–24, 2013.
[2] J. Weston, A. Makadia, and H. Yee. Label partitioning for sublinear ranking. In ICML, 2013.
[3] D. Hsu, S. Kakade, J. Langford, and T. Zhang. Multi-label prediction via compressed sensing. In NIPS, 2009.
[4] M. Cissé, N. Usunier, T. Artières, and P. Gallinari. Robust bloom filters for large multilabel classification tasks. In NIPS, pages 1851–1859, 2013.
[5] F. Tai and H.-T. Lin. Multi-label classification with principal label space transformation. In Workshop Proceedings of Learning from Multi-label Data, 2010.
[6] K. Balasubramanian and G. Lebanon. The landmark selection method for multiple output prediction. In ICML, 2012.
[7] W. Bi and J. T.-Y. Kwok. Efficient multi-label classification with many labels. In ICML, 2013.
[8] Y. Zhang and J. G. Schneider. Multi-label output codes using canonical correlation analysis. In AISTATS, pages 873–882, 2011.
[9] H.-F. Yu, P. Jain, P. Kar, and I. S. Dhillon. Large-scale multi-label learning with missing labels. In ICML, 2014.
[10] Y.-N. Chen and H.-T. Lin. Feature-aware label space dimension reduction for multi-label classification. In NIPS, pages 1538–1546, 2012.
[11] C.-S. Feng and H.-T. Lin. Multi-label classification with error-correcting codes. JMLR, 20, 2011.
[12] S. Ji, L. Tang, S. Yu, and J. Ye. Extracting shared subspace for multi-label classification. In KDD, 2008.
[13] J. Weston, S. Bengio, and N. Usunier. Wsabie: Scaling up to large vocabulary image annotation. In IJCAI, 2011.
[14] Z. Lin, G. Ding, M. Hu, and J. Wang. Multi-label classification via feature-aware implicit label space encoding. In ICML, pages 325–333, 2014.
[15] Y. Prabhu and M. Varma. FastXML: A fast, accurate and stable tree-classifier for extreme multi-label learning. In KDD, pages 263–272, 2014.
[16] Wikipedia dataset for the 4th large scale hierarchical text classification challenge, 2014.
[17] A. Ng and M. Jordan. On discriminative vs. generative classifiers: A comparison of logistic regression and naive Bayes. In NIPS, 2002.
[18] K. Q. Weinberger and L. K. Saul. An introduction to nonlinear dimensionality reduction by maximum variance unfolding. In AAAI, pages 1683–1686, 2006.
[19] B. Shaw and T. Jebara. Minimum volume embedding. In AISTATS, pages 460–467, 2007.
[20] P. Jain, R. Meka, and I. S. Dhillon. Guaranteed rank minimization via singular value projection. In NIPS, pages 937–945, 2010.
[21] P. Sprechmann, R. Litman, T. B. Yakar, A. Bronstein, and G. Sapiro. Efficient supervised sparse analysis and synthesis operators. In NIPS, 2013.
[22] P. Kar, K. B. Sriperumbudur, P. Jain, and H. Karnick. On the generalization ability of online learning algorithms for pairwise loss functions. In ICML, 2013.
[23] J. Leskovec and A. Krevl. SNAP Datasets: Stanford large network dataset collection, 2014.
[24] R. Wetzker, C. Zimmermann, and C. Bauckhage. Analyzing social bookmarking systems: A del.icio.us cookbook. In Mining Social Data (MSoDa) Workshop Proceedings, ECAI, pages 26–30, July 2008.
[25] A. Zubiaga. Enhancing navigation on Wikipedia with social tags, 2009.
[26] I. Katakis, G. Tsoumakas, and I. Vlahavas. Multilabel text classification for automated tag suggestion. In Proceedings of the ECML/PKDD 2008 Discovery Challenge, 2008.
[27] C. Snoek, M. Worring, J. van Gemert, J.-M. Geusebroek, and A. Smeulders. The challenge problem for automated detection of 101 semantic concepts in multimedia. In ACM Multimedia, 2006.
[28] G. Tsoumakas, I. Katakis, and I. Vlahavas. Effective and efficient multilabel classification in domains with large number of labels. In ECML/PKDD, 2008.
[29] E. Loza Mencía and J. Fürnkranz. Efficient pairwise multilabel classification for large-scale problems in the legal domain. In ECML/PKDD, 2008.
[30] Y.-N. Chen and H.-T. Lin. Feature-aware label space dimension reduction for multi-label classification. In NIPS, pages 1538–1546, 2012.
[31] B. Hariharan, S. V. N. Vishwanathan, and M. Varma. Efficient max-margin multi-label classification with applications to zero-shot learning. ML, 2012.
Super-Resolution Off the Grid

Qingqing Huang
MIT, EECS, LIDS, qqh@mit.edu

Sham M. Kakade
University of Washington, Department of Statistics, Computer Science & Engineering, sham@cs.washington.edu

Abstract

Super-resolution is the problem of recovering a superposition of point sources using bandlimited measurements, which may be corrupted with noise. This signal processing problem arises in numerous imaging problems, ranging from astronomy to biology to spectroscopy, where it is common to take (coarse) Fourier measurements of an object. Of particular interest is obtaining estimation procedures which are robust to noise, with the following desirable statistical and computational properties: we seek to use coarse Fourier measurements (bounded by some cutoff frequency); we hope to take a (quantifiably) small number of measurements; we desire our algorithm to run quickly. Suppose we have k point sources in d dimensions, where the points are separated by at least ∆ from each other (in Euclidean distance). This work provides an algorithm with the following favorable guarantees:
• The algorithm uses Fourier measurements whose frequencies are bounded by O(1/∆) (up to log factors). Previous algorithms require a cutoff frequency which may be as large as Ω(√d/∆).
• The number of measurements taken by, and the computational complexity of, our algorithm are bounded by a polynomial in both the number of points k and the dimension d, with no dependence on the separation ∆. In contrast, previous algorithms depended inverse-polynomially on the minimal separation and exponentially on the dimension for both of these quantities.
Our estimation procedure itself is simple: we take random bandlimited measurements (as opposed to taking an exponential number of measurements on the hypergrid). Furthermore, our analysis and algorithm are elementary (based on concentration bounds for sampling and the singular value decomposition).
1 Introduction We follow the standard mathematical abstraction of this problem (Candes & Fernandez-Granda [4, 3]): consider a d-dimensional signal x(t) modeled as a weighted sum of k Dirac measures in Rd: x(t) = k X j=1 wjδµ(j), (1) where the point sources, the µ(j)’s, are in Rd. Assume that the weights wj are complex valued, whose absolute values are lower and upper bounded by some positive constant. Assume that we are given k, the number of point sources1. 1An upper bound of the number of point sources suffices. 1 Define the measurement function f(s) : Rd ! C to be the convolution of the point source x(t) with a low-pass point spread function ei⇡<s,t> as below: f(s) = Z t2Rd ei⇡<t,s>x(dt) = k X j=1 wjei⇡<µ(j),s>. (2) In the noisy setting, the measurements are corrupted by uniformly bounded perturbation z: ef(s) = f(s) + z(s), |z(s)| ✏z, 8s. (3) Suppose that we are only allowed to measure the signal x(t) by evaluating the measurement function ef(s) at any s 2 Rd, and we want to recover the parameters of the point source signal, i.e., {wj, µ(j) : j 2 [k]}. We follow the standard normalization to assume that: µ(j) 2 [−1, +1]d, |wj| 2 [0, 1] 8j 2 [k]. Let wmin = minj |wj| denote the minimal weight, and let ∆be the minimal separation of the point sources defined as follows: ∆= min j6=j0 kµ(j) −µ(j0)k2, (4) where we use the Euclidean distance between the point sources for ease of exposition2. These quantities are key parameters in our algorithm and analysis. Intuitively, the recovery problem is harder if the minimal separation ∆is small and the minimal weight wmin is small. The first question is that, given exact measurements, namely ✏z = 0, where and how many measurements should we take so that the original signal x(t) can be exactly recovered. Definition 1.1 (Exact recovery). In the exact case, i.e. 
ε_z = 0, we say that an algorithm achieves exact recovery with m measurements of the signal x(t) if, upon input of these m measurements, the algorithm returns the exact set of parameters {w_j, µ^(j) : j ∈ [k]}. Moreover, we want the algorithm to be measurement-noise tolerant, in the sense that in the presence of measurement noise we can still recover good estimates of the point sources. Definition 1.2 (Stable recovery). In the noisy case, i.e., ε_z ≥ 0, we say that an algorithm achieves stable recovery with m measurements of the signal x(t) if, upon input of these m measurements, the algorithm returns estimates {ŵ_j, µ̂^(j) : j ∈ [k]} such that min_π max{ ‖µ̂^(j) − µ^(π(j))‖₂ : j ∈ [k] } ≤ poly(d, k) ε_z, where the min is over permutations π on [k] and poly(d, k) is a polynomial function of d and k. By definition, if an algorithm achieves stable recovery with m measurements, it also achieves exact recovery with these m measurements. The terminology of "super-resolution" is appropriate due to the following remarkable result (in the noiseless case) of Donoho [9]: suppose we want to accurately recover the point sources to an error of γ, where γ ≪ ∆. Naively, we may expect to require measurements whose frequency depends inversely on the desired accuracy γ. Donoho [9] showed that it suffices to obtain a finite number of measurements, whose frequencies are bounded by O(1/∆), in order to achieve exact recovery; thus resolving the point sources far more accurately than is naively implied by using frequencies of O(1/∆). Furthermore, the work of Candès & Fernandez-Granda [4, 3] showed that stable recovery, in the univariate case (d = 1), is achievable with a cutoff frequency of O(1/∆) using a convex program and a number of measurements whose size is polynomial in the relevant quantities. (Footnote 2: Our claims hold without using the "wrap-around metric," as in [4, 3], due to our random sampling. It is also possible to extend these results to the ℓ_p-norm case.) 
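To make the measurement model in (2)–(3) concrete, here is a minimal numpy sketch; the signal, the sampling point, and the noise draw are illustrative, not from the paper's experiments:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative point-source signal (Eq. 1): k sources in d dimensions.
k, d = 3, 2
mu = rng.uniform(-1.0, 1.0, size=(k, d))  # point sources mu^(j) in [-1, +1]^d
w = rng.uniform(0.5, 1.0, size=k)         # weights with |w_j| in [0, 1]

def f(s):
    """Exact Fourier measurement f(s) = sum_j w_j * exp(i*pi*<mu^(j), s>)  (Eq. 2)."""
    return np.sum(w * np.exp(1j * np.pi * mu @ s))

def f_noisy(s, eps_z=1e-3):
    """Measurement corrupted by a perturbation of magnitude at most eps_z (Eq. 3)."""
    z = eps_z * np.exp(2j * np.pi * rng.uniform())  # any z with |z| <= eps_z
    return f(s) + z

s = rng.normal(size=d)  # one (random) measurement frequency
assert abs(f_noisy(s) - f(s)) <= 1e-3 + 1e-12
```

At s = 0 the exact measurement is simply Σ_j w_j, which gives a quick sanity check on any implementation.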
        d = 1                                             d ≥ 1
        cutoff freq   measurements        runtime         cutoff freq   measurements      runtime
SDP     1/∆           k log(k) log(1/∆)   poly(1/∆, k)    C_d/∆_∞       (1/∆_∞)^d         poly((1/∆_∞)^d, k)
MP      1/∆           1/∆                 (1/∆)³          —             —                 —
Ours    1/∆           (k log k)²          (k log k)²      log(kd)/∆     (k log k + d)²    (k log k + d)²

Table 1: See Section 1.2 for a description, and Lemma 2.3 for details about the cutoff frequency. Here, we are implicitly using O(·) notation. 1.1 This work We are interested in stable recovery procedures with the following desirable statistical and computational properties: we seek to use coarse (low frequency) measurements; we hope to take a (quantifiably) small number of measurements; we desire our algorithm to run quickly. Informally, our main result is as follows: Theorem 1.3 (Informal statement of Theorem 2.2). For a fixed probability of error, the proposed algorithm achieves stable recovery with a number of measurements and a computational runtime that are both on the order of O((k log(k) + d)²). Furthermore, the algorithm makes measurements which are bounded in frequency by O(1/∆) (ignoring log factors). Notably, our algorithm and analysis directly deal with the multivariate case, with the univariate case as a special case. Importantly, the number of measurements and the computational runtime do not depend on the minimal separation of the point sources. This may be important even in certain low-dimensional imaging applications where taking physical measurements is costly (indeed, super-resolution is important in settings where ∆ is small). Furthermore, our technical contribution of how to decompose a certain tensor constructed from Fourier measurements may be of broader interest to related questions in statistics, signal processing, and machine learning. 1.2 Comparison to related work Table 1 summarizes the comparisons between our algorithm and the existing results. The multidimensional cutoff frequency we refer to in the table is the maximal coordinate-wise entry of any measurement frequency s (i.e., ‖s‖_∞). 
“SDP” refers to the semidefinite programming (SDP) based algorithms of Candès & Fernandez-Granda [3, 4]; in the univariate case, the number of measurements can be reduced by the method in Tang et al. [23] (this is reflected in the table). “MP” refers to matrix pencil type methods, studied in [14] and [15] for the univariate case. Here, we define the infinity-norm separation as ∆_∞ = min_{j≠j'} ‖µ^(j) − µ^(j')‖_∞, which is understood as the wrap-around distance on the unit circle. C_d ≥ 1 is a problem-dependent constant (discussed below). Observe the following differences between our algorithm and prior work: 1) Our minimal separation is measured under the ℓ₂-norm instead of the infinity norm, as in the SDP based algorithm. Note that ∆_∞ depends on the coordinate system; in the worst case, it can underestimate the separation by a 1/√d factor, namely ∆_∞ ∼ ∆/√d. 2) The computational complexity and number of measurements are polynomial in the dimension d and the number of point sources k, and surprisingly do not depend on the minimal separation of the point sources! Intuitively, when the minimal separation between the point sources is small, the problem should be harder; this is reflected only in the sampling range and the cutoff frequency of the measurements in our algorithm. 3) Furthermore, one could project the multivariate signal onto the coordinates and solve multiple univariate problems (as in [19, 17], which provided only exact recovery results). Naive random projections would lead to a cutoff frequency of O(√d/∆). SDP approaches: The work in [3, 4, 10] formulates the recovery problem as a total-variation minimization problem; they then show the dual problem can be formulated as an SDP. They focused on the analysis of d = 1 and only explicitly extend the proofs for d = 2. For d ≥ 1, Ingham-type theorems (see [20, 12]) suggest that C_d = O(√d). The number of measurements can be reduced by the method in [23] for the d = 1 case, which is noted in the table. 
Their method uses sampling “off the grid”; technically, their sampling scheme actually samples random points from the grid, though with far fewer measurements. Matrix pencil approaches: The matrix pencil method, MUSIC, and Prony’s method are essentially the same underlying idea, executed in different ways. The original Prony’s method directly attempts to find roots of a high-degree polynomial, where the root stability has few guarantees. Other methods aim to robustify the algorithm. Recently, for the univariate matrix pencil method, Liao & Fannjiang [14] and Moitra [15] provided a stability analysis of the MUSIC algorithm. Moitra [15] studied the optimal relationship between the cutoff frequency and ∆, showing that if the cutoff frequency is less than 1/∆, then stable recovery is not possible with the matrix pencil method (with high probability). 1.3 Notation Let R, C, and Z denote the real, complex, and natural numbers. For d ∈ Z, [d] denotes the set [d] = {1, ..., d}. For a set S, |S| denotes its cardinality. We use ⊕ to denote the direct sum of sets, namely S₁ ⊕ S₂ = {(a + b) : a ∈ S₁, b ∈ S₂}. Let e_n denote the n-th standard basis vector in R^d, for n ∈ [d]. Let S^d_{R,2} = {x ∈ R^d : ‖x‖₂ = R} denote the d-sphere of radius R in the d-dimensional standard Euclidean space. Denote the condition number of a matrix X ∈ R^{m×n} as cond₂(X) = σ_max(X)/σ_min(X), where σ_max(X) and σ_min(X) are the maximal and minimal singular values of X. We use ⊗ to denote the tensor product. Given matrices A, B, C ∈ C^{m×k}, the tensor product V = A ⊗ B ⊗ C ∈ C^{m×m×m} is given by V_{i₁,i₂,i₃} = Σ_{n=1}^k A_{i₁,n} B_{i₂,n} C_{i₃,n}. Another view of a tensor is that it defines a multi-linear mapping. For given dimensions m_A, m_B, m_C, the mapping V(·, ·, ·) : C^{m×m_A} × C^{m×m_B} × C^{m×m_C} → C^{m_A×m_B×m_C} is defined as: [V(X_A, X_B, X_C)]_{i₁,i₂,i₃} = Σ_{j₁,j₂,j₃∈[m]} V_{j₁,j₂,j₃} [X_A]_{j₁,i₁} [X_B]_{j₂,i₂} [X_C]_{j₃,i₃}. In particular, for a ∈ C^m, we use V(I, I, a) to denote the projection of the tensor V along the 3rd dimension. 
Note that if the tensor admits a decomposition V = A ⊗ B ⊗ C, it is straightforward to verify that V(I, I, a) = A Diag(C^⊤a) B^⊤. It is well known that if the factors A, B, C have full column rank then the rank-k decomposition is unique up to rescaling and a common column permutation. Moreover, if the condition numbers of the factors are upper bounded by a positive constant, then one can compute the unique tensor decomposition of V with stability guarantees (see [1] for a review; Lemma 2.5 herein provides an explicit statement). 2 Main Results 2.1 The algorithm We briefly describe the steps of Algorithm 1 below: (Take measurements) Given positive numbers m and R, randomly draw a sampling set S = {s^(1), ..., s^(m)} of m i.i.d. samples from the Gaussian distribution N(0, R² I_{d×d}). Form the set S' = S ∪ {s^(m+1) = e₁, ..., s^(m+d) = e_d, s^(m+d+1) = 0} ⊂ R^d. Denote m' = m + d + 1. Take another independent random sample v from the unit sphere, and define v^(1) = v, v^(2) = 2v.

Algorithm 1 (General algorithm). Input: R, m, noisy measurement function f̃(·). Output: estimates {ŵ_j, µ̂^(j) : j ∈ [k]}.
1. Take measurements: Let S = {s^(1), ..., s^(m)} be m i.i.d. samples from the Gaussian distribution N(0, R² I_{d×d}). Set s^(m+n) = e_n for all n ∈ [d] and s^(m+d+1) = 0. Denote m' = m + d + 1. Take another random sample v from the unit sphere, and set v^(1) = v and v^(2) = 2v. Construct a tensor F̃ ∈ C^{m'×m'×3}: F̃_{n₁,n₂,n₃} = f̃(s)|_{s = s^(n₁) + s^(n₂) + v^(n₃)}.
2. Tensor decomposition: Set (V̂_{S'}, D̂_w) = TensorDecomp(F̃). For j = 1, ..., k, set [V̂_{S'}]_j = [V̂_{S'}]_j / [V̂_{S'}]_{m',j}.
3. Read off estimates: For j = 1, ..., k, set µ̂^(j) = Real(log([V̂_{S'}]_{[m+1:m+d], j})/(iπ)).
4. Set Ŵ = argmin_{W ∈ C^k} ‖F̃ − V̂_{S'} ⊗ V̂_{S'} ⊗ (V̂₂D̂_W)‖_F.

Construct the third-order tensor F̃ ∈ C^{m'×m'×3} with noise-corrupted measurements f̃(s) evaluated at the points in S' ⊕ S' ⊕ {v^(1), v^(2)}, arranged in the following way: F̃_{n₁,n₂,n₃} = f̃(s)|_{s = s^(n₁)+s^(n₂)+v^(n₃)}, ∀n₁, n₂ ∈ [m'], n₃ ∈ [2]. 
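The slice identity V(I, I, a) = A Diag(C^⊤a) B^⊤ from Section 1.3, which the decomposition step rests on, is easy to check numerically. A small numpy sketch with illustrative random factors:

```python
import numpy as np

rng = np.random.default_rng(1)
m, k = 5, 3
A, B, C = (rng.normal(size=(m, k)) + 1j * rng.normal(size=(m, k)) for _ in range(3))

# Rank-k tensor V = A (x) B (x) C: V[i1,i2,i3] = sum_n A[i1,n] * B[i2,n] * C[i3,n].
V = np.einsum('in,jn,ln->ijl', A, B, C)

# Projection along the 3rd dimension: V(I, I, a) = sum_l V[:, :, l] * a[l].
a = rng.normal(size=m) + 1j * rng.normal(size=m)
proj = np.einsum('ijl,l->ij', V, a)

# The slice identity used by Jennrich-type decompositions.
assert np.allclose(proj, A @ np.diag(C.T @ a) @ B.T)
```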
(5) (Tensor decomposition) Define the characteristic matrix V_S to be the m×k matrix whose (n, j) entry is e^{iπ⟨µ^(j), s^(n)⟩}, for n ∈ [m] and j ∈ [k]. (6) Define the matrix V_{S'} ∈ C^{m'×k} to be V_{S'} = [V_S; V_d; 1^⊤], (7) where V_d ∈ C^{d×k} is defined in (17). Define V₂ ∈ C^{3×k} to be the matrix whose first two rows are [e^{iπ⟨µ^(j), v^(1)⟩}]_j and [e^{iπ⟨µ^(j), v^(2)⟩}]_j, and whose last row is all ones. Note that in the exact case (ε_z = 0) the tensor F constructed in (5) admits a rank-k decomposition: F = V_{S'} ⊗ V_{S'} ⊗ (V₂D_w). (8) Assume that V_{S'} has full column rank; then this tensor decomposition is unique up to column permutation and rescaling with very high probability over the randomness of the random unit vector v. Since each element of V_{S'} has unit norm, and we know that the last row of V_{S'} and the last row of V₂ are all ones, there exists a proper scaling so that we can uniquely recover the w_j's and the columns of V_{S'} up to a common permutation. In this paper, we adopt Jennrich's algorithm (see Algorithm 2) for the tensor decomposition. Other algorithms, for example the tensor power method ([1]) and recursive projection ([24]), which are possibly more stable than Jennrich's algorithm, can also be applied here. (Read off estimates) Let log(V_d) denote the element-wise logarithm of V_d. The estimates of the point sources are given by: [µ^(1), µ^(2), ..., µ^(k)] = log(V_d)/(iπ).

Algorithm 2 (TensorDecomp). Input: tensor F̃ ∈ C^{m×m×3}, rank k. Output: factor V̂ ∈ C^{m×k}.
1. Compute the truncated SVD F̃(I, I, e₁) = P̂ Λ̂ P̂^⊤ with the k leading singular values.
2. Set Ê = F̃(P̂, P̂, I). Set Ê₁ = Ê(I, I, e₁) and Ê₂ = Ê(I, I, e₂).
3. Let the columns of Û be the eigenvectors of Ê₁Ê₂^{−1} corresponding to the k eigenvalues with the largest absolute values.
4. Set V̂ = √m P̂ Û.

Remark 2.1. In the toy example, the simple algorithm corresponds to using the sampling set S' = {e₁, ..., e_d}. 
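A bare-bones sketch of the Jennrich step on exact data, with all quantities illustrative. For simplicity it diagonalizes F₁F₂⁺ via a pseudo-inverse, rather than whitening with the truncated SVD as Algorithm 2 does: given two slices F_a = V Diag(c_a) V^⊤ of the rank-k tensor, the eigenvectors of F₁F₂⁺ recover the columns of V up to scale and permutation.

```python
import numpy as np

rng = np.random.default_rng(2)
m, k = 6, 3

# Illustrative ground truth: V plays the role of V_{S'}; c1, c2 are the
# third-mode weights coming from the two random projections v^(1), v^(2).
V = rng.normal(size=(m, k)) + 1j * rng.normal(size=(m, k))
c1 = rng.normal(size=k) + 1j * rng.normal(size=k)
c2 = rng.normal(size=k) + 1j * rng.normal(size=k)

# Two slices of the exact rank-k tensor F = V (x) V (x) (V2 Dw).
F1 = V @ np.diag(c1) @ V.T
F2 = V @ np.diag(c2) @ V.T

# F1 @ pinv(F2) = V Diag(c1/c2) V^+, so its eigenvectors for the k largest
# |eigenvalues| are (scalar multiples of) the columns of V.
eigvals, eigvecs = np.linalg.eig(F1 @ np.linalg.pinv(F2))
top = np.argsort(-np.abs(eigvals))[:k]
U = eigvecs[:, top]

# Each recovered column is parallel to some true column of V.
for j in range(k):
    cos = max(abs(U[:, j].conj() @ V[:, n])
              / (np.linalg.norm(U[:, j]) * np.linalg.norm(V[:, n]))
              for n in range(k))
    assert cos > 0.99
```

With noisy slices the pseudo-inverse amplifies perturbations, which is why Algorithm 2 first projects onto the top-k singular subspace; Lemma 2.5 quantifies the resulting stability.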
The conventional univariate matrix pencil method corresponds to using the sampling set S' = {0, 1, ..., m}, and the set of measurements S' ⊕ S' ⊕ S' corresponds to the grid [m]³. 2.2 Guarantees In this section, we discuss how to pick the two parameters m and R, and prove that the proposed algorithm indeed achieves stable recovery in the presence of measurement noise. Theorem 2.2 (Stable recovery). There exists a universal constant C such that the following holds. Fix ε_x, δ_s, δ_v ∈ (0, 1/2); pick m such that m ≥ max{ (k/ε_x) √(8 log(k/δ_s)), d }; for d = 1, pick R ≥ √(2 log(1 + 2/ε_x))/(π∆); for d ≥ 2, pick R ≥ √(2 log(k/ε_x))/(π∆). Assume the bounded measurement noise model as in (3) and that ε_z ≤ (∆ δ_v w²_min)/(100 √d k⁵) · ((1 − 2ε_x)/(1 + 2ε_x))^{2.5}. With probability at least (1 − δ_s) over the random sampling of S, and with probability at least (1 − δ_v) over the random projections in Algorithm 2, the proposed Algorithm 1 returns an estimate of the point source signal x̂(t) = Σ_{j=1}^k ŵ_j δ_{µ̂^(j)} with accuracy: min_π max{ ‖µ̂^(j) − µ^(π(j))‖₂ : j ∈ [k] } ≤ C (√d k⁵)/(∆ δ_v) · (w_max/w²_min) · ((1 + 2ε_x)/(1 − 2ε_x))^{2.5} ε_z, where the min is over permutations π on [k]. Moreover, the proposed algorithm has time complexity on the order of O((m')³). The next lemma shows that, essentially, with overwhelming probability all the frequencies taken concentrate within the hyper-cube with cutoff frequency R' on each coordinate, where R' is comparable to R. Lemma 2.3 (The cutoff frequency). For d > 1, with high probability, all of the 2(m')² sampling frequencies in S' ⊕ S' ⊕ {v^(1), v^(2)} satisfy ‖s^(j₁) + s^(j₂) + v^(j₃)‖_∞ ≤ R', ∀j₁, j₂ ∈ [m'], j₃ ∈ [2], where the per-coordinate cutoff frequency is given by R' = O(R √(log(md))). For the d = 1 case, the cutoff frequency R' can be made to be on the order of R' = O(1/∆). Remark 2.4 (Failure probability). Overall, the failure probability consists of two pieces: δ_v for the random projection of v, and δ_s for the random sampling that ensures the bounded condition number of V_S. 
This may be boosted to arbitrarily high probability through repetition. 2.3 Key Lemmas Stability of tensor decomposition: In this paragraph, we give a brief description and the stability guarantee of the well-known Jennrich's algorithm ([11, 13]) for low-rank 3rd-order tensor decomposition. We state it only for the symmetric tensors that appear in the proposed algorithm. Consider a tensor F = V ⊗ V ⊗ (V₂D_w) ∈ C^{m×m×3} where the factor V has full column rank k. Then the decomposition is unique up to column permutation and rescaling, and Algorithm 2 finds the factors efficiently. Moreover, the eigendecomposition is stable if the factor V is well-conditioned and the eigenvalues of F_a F_b^† are well separated. Lemma 2.5 (Stability of Jennrich's algorithm). Consider the 3rd-order tensor F = V ⊗ V ⊗ (V₂D_w) ∈ C^{m×m×3} of rank k ≤ m, constructed as in Step 1 of Algorithm 1. Given a tensor F̃ that is element-wise close to F, namely |F̃_{n₁,n₂,n₃} − F_{n₁,n₂,n₃}| ≤ ε_z for all n₁, n₂, n₃ ∈ [m], and assuming that the noise is small, ε_z ≤ (∆ δ_v w²_min)/(100 √d k w_max cond₂(V)⁵), use F̃ as the input to Algorithm 2. With probability at least (1 − δ_v) over the random projections v^(1) and v^(2), we can bound the distance between the columns of the output V̂ and those of V by: min_π max_j { ‖V̂_j − V_{π(j)}‖₂ : j ∈ [k] } ≤ C (√d k²)/(∆ δ_v) · (w_max/w²_min) cond₂(V)⁵ ε_z, (9) where C is a universal constant. Condition number of V_{S'}: The following lemma is helpful. Lemma 2.6. Let V_{S'} ∈ C^{(m+d+1)×k} be the factor as defined in (7). Recall that V_{S'} = [V_S; V_d; 1], where V_d is defined in (17), and V_S is the characteristic matrix defined in (6). We can bound the condition number of V_{S'} by cond₂(V_{S'}) ≤ √(1 + √k) · cond₂(V_S). (10) Condition number of the characteristic matrix V_S: The stability analysis of the proposed algorithm therefore boils down to understanding the relation between the random sampling set S and the condition number of the characteristic matrix V_S. This is analyzed in Lemma 2.8 (the main technical lemma). Lemma 2.7. 
Fix any number ε_x ∈ (0, 1/2). Consider a Gaussian vector s with distribution N(0, R² I_{d×d}), where R ≥ √(2 log(k/ε_x))/(π∆) for d ≥ 2, and R ≥ √(2 log(1 + 2/ε_x))/(π∆) for d = 1. Define the Hermitian random matrix X_s ∈ C^{k×k} to be X_s = [e^{−iπ⟨µ^(1),s⟩}, ..., e^{−iπ⟨µ^(k),s⟩}]^⊤ [e^{iπ⟨µ^(1),s⟩}, ..., e^{iπ⟨µ^(k),s⟩}]. (11) We can bound the spectrum of E_s[X_s] by: (1 − ε_x) I_{k×k} ⪯ E_s[X_s] ⪯ (1 + ε_x) I_{k×k}. (12) Lemma 2.8 (Main technical lemma). In the same setting as Lemma 2.7, let S = {s^(1), ..., s^(m)} be m independent samples of the Gaussian vector s. For m ≥ (k/ε_x) √(8 log(k/δ_s)), with probability at least 1 − δ_s over the random sampling, the condition number of the factor V_S is bounded by: cond₂(V_S) ≤ √((1 + 2ε_x)/(1 − 2ε_x)). (13) Acknowledgments The authors thank Rong Ge and Ankur Moitra for very helpful discussions. Sham Kakade acknowledges funding from the Washington Research Foundation for innovation in Data-intensive Discovery. References [1] A. Anandkumar, R. Ge, D. Hsu, S. M. Kakade, and M. Telgarsky. Tensor decompositions for learning latent variable models. The Journal of Machine Learning Research, 15(1):2773–2832, 2014. [2] A. Anandkumar, D. Hsu, and S. M. Kakade. A method of moments for mixture models and hidden Markov models. arXiv preprint arXiv:1203.0683, 2012. [3] E. J. Candès and C. Fernandez-Granda. Super-resolution from noisy data. Journal of Fourier Analysis and Applications, 19(6):1229–1254, 2013. [4] E. J. Candès and C. Fernandez-Granda. Towards a mathematical theory of super-resolution. Communications on Pure and Applied Mathematics, 67(6):906–956, 2014. [5] Y. Chen and Y. Chi. Robust spectral compressed sensing via structured matrix completion. Information Theory, IEEE Transactions on, 60(10):6576–6601, 2014. [6] S. Dasgupta. Learning mixtures of Gaussians. In Foundations of Computer Science, 1999. 40th Annual Symposium on, pages 634–644. IEEE, 1999. [7] S. Dasgupta and A. Gupta. 
An elementary proof of a theorem of Johnson and Lindenstrauss. Random Structures and Algorithms, 22(1):60–65, 2003. [8] S. Dasgupta and L. J. Schulman. A two-round variant of EM for Gaussian mixtures. In Proceedings of the Sixteenth Conference on Uncertainty in Artificial Intelligence, pages 152–159. Morgan Kaufmann Publishers Inc., 2000. [9] D. L. Donoho. Superresolution via sparsity constraints. SIAM Journal on Mathematical Analysis, 23(5):1309–1331, 1992. [10] C. Fernandez-Granda. A Convex-programming Framework for Super-resolution. PhD thesis, Stanford University, 2014. [11] R. A. Harshman. Foundations of the PARAFAC procedure: Models and conditions for an “explanatory” multi-modal factor analysis. 1970. [12] V. Komornik and P. Loreti. Fourier Series in Control Theory. Springer Science & Business Media, 2005. [13] S. Leurgans, R. Ross, and R. Abel. A decomposition for three-way arrays. SIAM Journal on Matrix Analysis and Applications, 14(4):1064–1083, 1993. [14] W. Liao and A. Fannjiang. MUSIC for single-snapshot spectral estimation: Stability and super-resolution. Applied and Computational Harmonic Analysis, 2014. [15] A. Moitra. The threshold for super-resolution via extremal functions. arXiv preprint arXiv:1408.1681, 2014. [16] E. Mossel and S. Roch. Learning nonsingular phylogenies and hidden Markov models. In Proceedings of the Thirty-seventh Annual ACM Symposium on Theory of Computing, pages 366–375. ACM, 2005. [17] S. Nandi, D. Kundu, and R. K. Srivastava. Noise space decomposition method for two-dimensional sinusoidal model. Computational Statistics & Data Analysis, 58:147–161, 2013. [18] K. Pearson. Contributions to the mathematical theory of evolution. Philosophical Transactions of the Royal Society of London. A, pages 71–110, 1894. [19] D. Potts and M. Tasche. Parameter estimation for nonincreasing exponential sums by Prony-like methods. Linear Algebra and its Applications, 439(4):1024–1039, 2013. [20] D. L. Russell. 
Controllability and stabilizability theory for linear partial differential equations: recent progress and open questions. SIAM Review, 20(4):639–739, 1978. [21] A. Sanjeev and R. Kannan. Learning mixtures of arbitrary Gaussians. In Proceedings of the Thirty-third Annual ACM Symposium on Theory of Computing, pages 247–257. ACM, 2001. [22] G. Schiebinger, E. Robeva, and B. Recht. Superresolution without separation. arXiv preprint arXiv:1506.03144, 2015. [23] G. Tang, B. N. Bhaskar, P. Shah, and B. Recht. Compressed sensing off the grid. Information Theory, IEEE Transactions on, 59(11):7465–7490, 2013. [24] S. S. Vempala and Y. F. Xiao. Max vs min: Independent component analysis with nearly linear sample complexity. arXiv preprint arXiv:1412.2954, 2014. | 2015 | 92 |
5,986 | Automatic Variational Inference in Stan Alp Kucukelbir Columbia University alp@cs.columbia.edu Rajesh Ranganath Princeton University rajeshr@cs.princeton.edu Andrew Gelman Columbia University gelman@stat.columbia.edu David M. Blei Columbia University david.blei@columbia.edu Abstract Variational inference is a scalable technique for approximate Bayesian inference. Deriving variational inference algorithms requires tedious model-specific calculations; this makes it difficult for non-experts to use. We propose an automatic variational inference algorithm, automatic differentiation variational inference (advi); we implement it in Stan (code available), a probabilistic programming system. In advi the user provides a Bayesian model and a dataset, nothing else. We make no conjugacy assumptions and support a broad class of models. The algorithm automatically determines an appropriate variational family and optimizes the variational objective. We compare advi to mcmc sampling across hierarchical generalized linear models, nonconjugate matrix factorization, and a mixture model. We train the mixture model on a quarter million images. With advi we can use variational inference on any model we write in Stan. 1 Introduction Bayesian inference is a powerful framework for analyzing data. We design a model for data using latent variables; we then analyze data by calculating the posterior density of the latent variables. For machine learning models, calculating the posterior is often difficult; we resort to approximation. Variational inference (vi) approximates the posterior with a simpler distribution [1, 2]. We search over a family of simple distributions and find the member closest to the posterior. This turns approximate inference into optimization. vi has had a tremendous impact on machine learning; it is typically faster than Markov chain Monte Carlo (mcmc) sampling (as we show here too) and has recently scaled up to massive data [3]. 
Unfortunately, vi algorithms are difficult to derive. We must first define the family of approximating distributions, and then calculate model-specific quantities relative to that family to solve the variational optimization problem. Both steps require expert knowledge. The resulting algorithm is tied to both the model and the chosen approximation. In this paper we develop a method for automating variational inference, automatic differentiation variational inference (advi). Given any model from a wide class (specifically, probability models differentiable with respect to their latent variables), advi determines an appropriate variational family and an algorithm for optimizing the corresponding variational objective. We implement advi in Stan [4], a flexible probabilistic programming system. Stan describes a high-level language to define probabilistic models (e.g., Figure 2) as well as a model compiler, a library of transformations, and an efficient automatic differentiation toolbox. With advi we can now use variational inference on any model we write in Stan.¹ (See Appendices F to J.) (Footnote 1: advi is available in Stan 2.8. See Appendix C.) Figure 1: Held-out predictive accuracy results for a Gaussian mixture model (gmm) of the imageclef image histogram dataset. (a) Subset of 1000 images: advi outperforms the no-U-turn sampler (nuts), the default sampling method in Stan [5]. (b) Full dataset of 250 000 images: advi scales to large datasets by subsampling minibatches of size B from the dataset at each iteration [3]. We present more details in Section 3.3 and Appendix J. Figure 1 illustrates the advantages of our method. Consider a nonconjugate Gaussian mixture model for analyzing natural images; this is 40 lines in Stan (Figure 10). Figure 1a illustrates Bayesian inference on 1000 images. 
The y-axis is held-out likelihood, a measure of model fitness; the x-axis is time on a log scale. advi is orders of magnitude faster than nuts, a state-of-the-art mcmc algorithm (and Stan's default inference technique) [5]. We also study nonconjugate factorization models and hierarchical generalized linear models in Section 3. Figure 1b illustrates Bayesian inference on 250 000 images, the size of data we more commonly find in machine learning. Here we use advi with stochastic variational inference [3], giving an approximate posterior in under two hours. For data like these, mcmc techniques cannot complete the analysis. Related work. advi automates variational inference within the Stan probabilistic programming system [4]. This draws on two major themes. The first is a body of work that aims to generalize vi. Kingma and Welling [6] and Rezende et al. [7] describe a reparameterization of the variational problem that simplifies optimization. Ranganath et al. [8] and Salimans and Knowles [9] propose a black-box technique, one that only requires the model and the gradient of the approximating family. Titsias and Lázaro-Gredilla [10] leverage the gradient of the joint density for a small class of models. Here we build on and extend these ideas to automate variational inference; we highlight technical connections as we develop the method. The second theme is probabilistic programming. Wingate and Weber [11] study vi in general probabilistic programs, as supported by languages like Church [12], Venture [13], and Anglican [14]. Another probabilistic programming system is infer.NET, which implements variational message passing [15], an efficient algorithm for conditionally conjugate graphical models. Stan supports a more comprehensive class of nonconjugate models with differentiable latent variables; see Section 2.1. 2 Automatic Differentiation Variational Inference Automatic differentiation variational inference (advi) follows a straightforward recipe. 
First we transform the support of the latent variables to the real coordinate space. For example, the logarithm transforms a positive variable, such as a standard deviation, to the real line. Then we posit a Gaussian variational distribution to approximate the posterior. This induces a non-Gaussian approximation in the original variable space. Last we combine automatic differentiation with stochastic optimization to maximize the variational objective. We begin by defining the class of models we support. 2.1 Differentiable Probability Models Consider a dataset X = x_{1:N} with N observations. Each x_n is a discrete or continuous random vector. The likelihood p(X | θ) relates the observations to a set of latent random variables θ. Bayesian analysis posits a prior density p(θ) on the latent variables. Combining the likelihood with the prior gives the joint density p(X, θ) = p(X | θ) p(θ).

data {
  int N;     // number of observations
  int x[N];  // discrete-valued observations
}
parameters {
  // latent variable, must be positive
  real<lower=0> theta;
}
model {
  // non-conjugate prior for latent variable
  theta ~ weibull(1.5, 1);
  // likelihood
  for (n in 1:N)
    x[n] ~ poisson(theta);
}

Figure 2: Specifying a simple nonconjugate probability model in Stan. We focus on approximate inference for differentiable probability models. These models have continuous latent variables θ. They also have a gradient of the log-joint with respect to the latent variables, ∇_θ log p(X, θ). The gradient is valid within the support of the prior, supp(p(θ)) = {θ | θ ∈ R^K and p(θ) > 0} ⊆ R^K, where K is the dimension of the latent variable space. This support set is important: it determines the support of the posterior density and plays a key role later in the paper. We make no assumptions about conjugacy, either full or conditional.² For example, consider a model that contains a Poisson likelihood with unknown rate, p(x | θ). 
The observed variable x is discrete; the latent rate θ is continuous and positive. Place a Weibull prior on θ, defined over the positive real numbers. The resulting joint density describes a nonconjugate differentiable probability model. (See Figure 2.) Its partial derivative ∂/∂θ p(x, θ) is valid within the support of the Weibull distribution, supp(p(θ)) = R⁺ ⊂ R. Because this model is nonconjugate, the posterior is not a Weibull distribution. This presents a challenge for classical variational inference. In Section 2.3, we will see how advi handles this model. Many machine learning models are differentiable. For example: linear and logistic regression, matrix factorization with continuous or discrete measurements, linear dynamical systems, and Gaussian processes. Mixture models, hidden Markov models, and topic models have discrete random variables. Marginalizing out these discrete variables renders these models differentiable. (We show an example in Section 3.3.) However, marginalization is not tractable for all models, such as the Ising model, sigmoid belief networks, and (untruncated) Bayesian nonparametric models. 2.2 Variational Inference Bayesian inference requires the posterior density p(θ | X), which describes how the latent variables vary when conditioned on a set of observations X. Many posterior densities are intractable because their normalization constants lack closed forms. Thus, we seek to approximate the posterior. Consider an approximating density q(θ; φ) parameterized by φ. We make no assumptions about its shape or support. We want to find the parameters of q(θ; φ) to best match the posterior according to some loss function. Variational inference (vi) minimizes the Kullback-Leibler (kl) divergence from the approximation to the posterior [2], φ* = argmin_φ KL(q(θ; φ) ‖ p(θ | X)). (1) Typically the kl divergence also lacks a closed form. Instead we maximize the evidence lower bound (elbo), a proxy to the kl divergence, L(φ) = E_{q(θ)}[log p(X, θ)] − E_{q(θ)}[log q(θ; φ)]. The first term is an expectation of the joint density under the approximation, and the second is the entropy of the variational density. Maximizing the elbo minimizes the kl divergence [1, 16]. (Footnote 2: The posterior of a fully conjugate model is in the same family as the prior; a conditionally conjugate model has this property within the complete conditionals of the model [3].) The minimization problem from Eq. (1) becomes φ* = argmax_φ L(φ) such that supp(q(θ; φ)) ⊆ supp(p(θ | X)). (2) We explicitly specify the support-matching constraint implied in the kl divergence.³ We highlight this constraint, as we do not specify the form of the variational approximation; thus we must ensure that q(θ; φ) stays within the support of the posterior, which is defined by the support of the prior. Why is vi difficult to automate? In classical variational inference, we typically design a conditionally conjugate model. Then the optimal approximating family matches the prior. This satisfies the support constraint by definition [16]. When we want to approximate models that are not conditionally conjugate, we carefully study the model and design custom approximations. These depend on the model and on the choice of the approximating density. One way to automate vi is to use black-box variational inference [8, 9]. If we select a density whose support matches the posterior, then we can directly maximize the elbo using Monte Carlo (mc) integration and stochastic optimization. Another strategy is to restrict the class of models and use a fixed variational approximation [10]. For instance, we may use a Gaussian density for inference in unconstrained differentiable probability models, i.e. where supp(p(θ)) = R^K. We adopt a transformation-based approach. First we automatically transform the support of the latent variables in our model to the real coordinate space. Then we posit a Gaussian variational density. 
The transformation induces a non-Gaussian approximation in the original variable space and guarantees that it stays within the support of the posterior. Here is how it works.

2.3 Automatic Transformation of Constrained Variables

Begin by transforming the support of the latent variables θ such that they live in the real coordinate space R^K. Define a one-to-one differentiable function T : supp(p(θ)) → R^K and identify the transformed variables as ζ = T(θ). The transformed joint density g(X, ζ) is

g(X, ζ) = p(X, T^{-1}(ζ)) |det J_{T^{-1}}(ζ)|,

where p is the joint density in the original latent variable space, and J_{T^{-1}} is the Jacobian of the inverse of T. Transformations of continuous probability densities require a Jacobian; it accounts for how the transformation warps unit volumes [17]. (See Appendix D.)

Consider again our running example. The rate θ lives in R⁺. The logarithm ζ = T(θ) = log(θ) transforms R⁺ to the real line R. Its Jacobian adjustment is the derivative of the inverse of the logarithm, |det J_{T^{-1}}(ζ)| = exp(ζ). The transformed density is

g(x, ζ) = Poisson(x | exp(ζ)) Weibull(exp(ζ); 1.5, 1) exp(ζ).

Figures 3a and 3b depict this transformation. As we describe in the introduction, we implement our algorithm in Stan to enable generic inference. Stan implements a model compiler that automatically handles transformations. It works by applying a library of transformations and their corresponding Jacobians to the joint model density.⁴ This transforms the joint density of any differentiable probability model to the real coordinate space. Now we can choose a variational distribution independent from the model.

2.4 Implicit Non-Gaussian Variational Approximation

After the transformation, the latent variables ζ have support on R^K. We posit a diagonal (mean-field) Gaussian variational approximation

q(ζ; φ) = N(ζ; μ, σ) = ∏_{k=1}^K N(ζ_k; μ_k, σ_k).

³If supp(q) ⊄ supp(p), then outside the support of p we have KL(q ‖ p) = E_q[log q] − E_q[log p] = ∞.
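To make the Jacobian bookkeeping concrete, here is a small self-contained sketch of the running example (our own toy code, not the Stan implementation; the evaluation points x = 3 and θ = 0.8 are arbitrary choices):

```python
import math

# Running example: x ~ Poisson(theta), theta ~ Weibull(1.5, 1) on R+.
# zeta = T(theta) = log(theta) maps R+ to R; the inverse transform
# contributes the Jacobian factor |det J_{T^-1}(zeta)| = exp(zeta).

def log_weibull_pdf(theta, k=1.5, lam=1.0):
    # log density of a Weibull(k, lam) random variable, theta > 0
    return math.log(k / lam) + (k - 1.0) * math.log(theta / lam) - (theta / lam) ** k

def log_poisson_pmf(x, rate):
    return x * math.log(rate) - rate - math.lgamma(x + 1)

def log_joint_original(x, theta):
    # log p(x, theta) on the constrained space theta > 0
    return log_poisson_pmf(x, theta) + log_weibull_pdf(theta)

def log_joint_transformed(x, zeta):
    # log g(x, zeta) on the unconstrained space zeta in R
    theta = math.exp(zeta)                       # T^{-1}(zeta)
    return log_joint_original(x, theta) + zeta   # + log|Jacobian| = zeta

# Pointwise check of g(x, zeta) = p(x, T^{-1}(zeta)) |det J_{T^-1}(zeta)|
x, theta = 3, 0.8
zeta = math.log(theta)
assert abs(log_joint_transformed(x, zeta) - (log_joint_original(x, theta) + zeta)) < 1e-12
```

Integrating either density over its own space yields the same marginal p(x); the Jacobian factor is exactly what preserves probability mass under the change of variables.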
⁴Stan provides transformations for upper and lower bounds, simplex and ordered vectors, and structured matrices such as covariance matrices and Cholesky factors [4].

Figure 3: Transformations for advi. The purple line is the posterior. The green line is the approximation. (a) The latent variable space is R⁺. (a→b) T transforms the latent variable space to R. (b) The variational approximation is a Gaussian. (b→c) S_{μ,ω} absorbs the parameters of the Gaussian. (c) We maximize the elbo in the standardized space, with a fixed standard Gaussian approximation.

The vector φ = (μ₁, …, μ_K, σ₁, …, σ_K) contains the mean and standard deviation of each Gaussian factor. This defines our variational approximation in the real coordinate space. (Figure 3b.)

The transformation T maps the support of the latent variables to the real coordinate space; its inverse T^{-1} maps back to the support of the latent variables. This implicitly defines the variational approximation in the original latent variable space as q(T(θ); φ) |det J_T(θ)|. The transformation ensures that the support of this approximation is always bounded by that of the true posterior in the original latent variable space (Figure 3a). Thus we can freely optimize the elbo in the real coordinate space (Figure 3b) without worrying about the support matching constraint. The elbo in the real coordinate space is

L(μ, σ) = E_{q(ζ)}[ log p(X, T^{-1}(ζ)) + log |det J_{T^{-1}}(ζ)| ] + (K/2)(1 + log(2π)) + Σ_{k=1}^K log σ_k,

where we plug in the analytic form of the Gaussian entropy. (The derivation is in Appendix A.) We choose a diagonal Gaussian for efficiency. This choice may call to mind the Laplace approximation technique, where a second-order Taylor expansion around the maximum-a-posteriori estimate gives a Gaussian approximation to the posterior.
However, using a Gaussian variational approximation is not equivalent to the Laplace approximation [18]. The Laplace approximation relies on maximizing the probability density; it fails with densities that have discontinuities on their boundary. The Gaussian approximation considers probability mass; it does not suffer this degeneracy. Furthermore, our approach is distinct in another way: because of the transformation, the posterior approximation in the original latent variable space (Figure 3a) is non-Gaussian.

2.5 Automatic Differentiation for Stochastic Optimization

We now maximize the elbo in real coordinate space,

μ*, σ* = arg max_{μ,σ} L(μ, σ) such that σ ≻ 0.   (3)

We use gradient ascent to reach a local maximum of the elbo. Unfortunately, we cannot apply automatic differentiation to the elbo in this form. This is because the expectation defines an intractable integral that depends on μ and σ; we cannot directly represent it as a computer program. Moreover, the standard deviations in σ must remain positive. Thus, we employ one final transformation: elliptical standardization⁵ [19], shown in Figures 3b and 3c.

First re-parameterize the Gaussian distribution with the log of the standard deviation, ω = log(σ), applied element-wise. The support of ω is now the real coordinate space and σ is always positive. Then define the standardization η = S_{μ,ω}(ζ) = diag(exp(ω))^{-1}(ζ − μ). The standardization encapsulates the variational parameters and gives the fixed density

q(η; 0, I) = N(η; 0, I) = ∏_{k=1}^K N(η_k; 0, 1).

⁵Also known as a “co-ordinate transformation” [7], an “invertible transformation” [10], and the “reparameterization trick” [6].

Algorithm 1: Automatic differentiation variational inference (advi)
Input: Dataset X = x_{1:N}, model p(X, θ).
Set iteration counter i = 0 and choose a stepsize sequence ρ^(i).
Initialize μ^(0) = 0 and ω^(0) = 0.
while change in elbo is above some threshold do
  Draw M samples η_m ∼ N(0, I) from the standard multivariate Gaussian.
  Invert the standardization ζ_m = diag(exp(ω^(i))) η_m + μ^(i).
  Approximate ∇_μ L and ∇_ω L using mc integration (Eqs. (4) and (5)).
  Update μ^(i+1) ← μ^(i) + ρ^(i) ∇_μ L and ω^(i+1) ← ω^(i) + ρ^(i) ∇_ω L.
  Increment iteration counter.
end
Return μ* ← μ^(i) and ω* ← ω^(i).

The standardization transforms the variational problem from Eq. (3) into

μ*, ω* = arg max_{μ,ω} L(μ, ω) = arg max_{μ,ω} E_{N(η; 0, I)}[ log p(X, T^{-1}(S^{-1}_{μ,ω}(η))) + log |det J_{T^{-1}}(S^{-1}_{μ,ω}(η))| ] + Σ_{k=1}^K ω_k,

where we drop constant terms from the calculation. This expectation is with respect to a standard Gaussian and the parameters μ and ω are both unconstrained (Figure 3c). We push the gradient inside the expectations and apply the chain rule to get

∇_μ L = E_{N(η)}[ ∇_θ log p(X, θ) ∇_ζ T^{-1}(ζ) + ∇_ζ log |det J_{T^{-1}}(ζ)| ],   (4)

∇_{ω_k} L = E_{N(η_k)}[ ( ∇_{θ_k} log p(X, θ) ∇_{ζ_k} T^{-1}(ζ) + ∇_{ζ_k} log |det J_{T^{-1}}(ζ)| ) η_k exp(ω_k) ] + 1.   (5)

(The derivations are in Appendix B.) We can now compute the gradients inside the expectation with automatic differentiation. The only thing left is the expectation. mc integration provides a simple approximation: draw M samples from the standard Gaussian and evaluate the empirical mean of the gradients within the expectation [20]. This gives unbiased noisy gradients of the elbo for any differentiable probability model. We can now use these gradients in a stochastic optimization routine to automate variational inference.

2.6 Automatic Variational Inference

Equipped with unbiased noisy gradients of the elbo, advi implements stochastic gradient ascent (Algorithm 1). We ensure convergence by choosing a decreasing step-size sequence. In practice, we use an adaptive sequence [21] with finite memory. (See Appendix E for details.) advi has complexity O(2NMK) per iteration, where M is the number of mc samples (typically between 1 and 10). Coordinate ascent vi has complexity O(2NK) per pass over the dataset. We scale advi to large datasets using stochastic optimization [3, 10].
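The gradient estimators (4) and (5) plug directly into the loop of Algorithm 1. Here is a minimal numpy sketch on a toy conjugate model of our own choosing (a N(0, 1) prior with unit-variance Gaussian likelihood, so T is the identity, the Jacobian term vanishes, and the exact posterior is available for checking); it illustrates the estimator, not the Stan implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(1.0, 1.0, size=20)   # observed data (toy)
N, S = len(X), X.sum()

def grad_log_joint(theta):
    # d/dtheta [ log N(theta; 0, 1) + sum_i log N(x_i; theta, 1) ]
    return S - (N + 1.0) * theta

mu, omega = 0.0, 0.0                # variational parameters (omega = log sigma)
M = 10                              # mc samples per iteration
for t in range(1, 3001):
    eta = rng.standard_normal(M)            # eta ~ N(0, I)
    theta = mu + np.exp(omega) * eta        # invert the standardization
    g = grad_log_joint(theta)
    grad_mu = g.mean()                      # Eq. (4) with T = identity
    grad_omega = (g * eta).mean() * np.exp(omega) + 1.0   # Eq. (5)
    step = 0.5 / ((N + 1.0) * np.sqrt(t))   # decreasing step-size sequence
    mu += step * grad_mu
    omega += step * grad_omega

# Exact posterior here is N(S/(N+1), 1/(N+1)), against which the
# fitted (mu, exp(omega)) can be compared.
post_mean, post_sd = S / (N + 1.0), 1.0 / np.sqrt(N + 1.0)
```

With the decaying step size, the variational mean and standard deviation should settle near the exact posterior values S/(N+1) and 1/√(N+1).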
The adjustment to Algorithm 1 is simple: sample a minibatch of size B ≪ N from the dataset and scale the likelihood of the sampled minibatch by N/B [3]. The stochastic extension of advi has per-iteration complexity O(2BMK).

Figure 4: Hierarchical generalized linear models. Comparison of advi to mcmc: held-out predictive likelihood as a function of wall time. (a) Linear regression with ard; (b) hierarchical logistic regression. Both panels compare advi (M = 1 and M = 10) against nuts and hmc.

3 Empirical Study

We now study advi across a variety of models. We compare its speed and accuracy to two Markov chain Monte Carlo (mcmc) sampling algorithms: Hamiltonian Monte Carlo (hmc) [22] and the no-U-turn sampler (nuts)⁶ [5]. We assess advi convergence by tracking the elbo. To place advi and mcmc on a common scale, we report predictive likelihood on held-out data as a function of time. We approximate the posterior predictive likelihood using a mc estimate. For mcmc, we plug in posterior samples. For advi, we draw samples from the posterior approximation during the optimization. We initialize advi with a draw from a standard Gaussian.

We explore two hierarchical regression models, two matrix factorization models, and a mixture model. All of these models have nonconjugate prior structures. We conclude by analyzing a dataset of 250 000 images, where we report results across a range of minibatch sizes B.

3.1 A Comparison to Sampling: Hierarchical Regression Models

We begin with two nonconjugate regression models: linear regression with automatic relevance determination (ard) [16] and hierarchical logistic regression [23].

Linear Regression with ard. This is a sparse linear regression model with a hierarchical prior structure. (Details in Appendix F.) We simulate a dataset with 250 regressors such that half of the regressors have no predictive power.
We use 10 000 training samples and hold out 1000 for testing.

Logistic Regression with Spatial Hierarchical Prior. This is a hierarchical logistic regression model from political science. The prior captures dependencies, such as states and regions, in a polling dataset from the United States 1988 presidential election [23]. (Details in Appendix G.) We train using 10 000 data points and withhold 1536 for evaluation. The regressors contain age, education, state, and region indicators. The dimension of the regression problem is 145.

Results. Figure 4 plots average log predictive accuracy as a function of time. For these simple models, all methods reach the same predictive accuracy. We study advi with two settings of M, the number of mc samples used to estimate gradients. A single sample per iteration is sufficient; it is also the fastest. (We set M = 1 from here on.)

3.2 Exploring Nonconjugacy: Matrix Factorization Models

We continue by exploring two nonconjugate non-negative matrix factorization models: a constrained Gamma Poisson model [24] and a Dirichlet Exponential model. Here, we show how easy it is to explore new models using advi. In both models, we use the Frey Face dataset, which contains 1956 frames (28 × 20 pixels) of facial expressions extracted from a video sequence.

Constrained Gamma Poisson. This is a Gamma Poisson factorization model with an ordering constraint: each row of the Gamma matrix goes from small to large values. (Details in Appendix H.)

⁶nuts is an adaptive extension of hmc. It is the default sampler in Stan.

Figure 5: Non-negative matrix factorization of the Frey Faces dataset ((a) Gamma Poisson predictive likelihood; (b) Dirichlet Exponential predictive likelihood; (c) Gamma Poisson factors; (d) Dirichlet Exponential factors).
Comparison of advi to mcmc: held-out predictive likelihood as a function of wall time. Dirichlet Exponential. This is a nonconjugate Dirichlet Exponential factorization model with a Poisson likelihood. (Details in Appendix I.) Results. Figure 5 shows average log predictive accuracy as well as ten factors recovered from both models. advi provides an order of magnitude speed improvement over nuts (Figure 5a). nuts struggles with the Dirichlet Exponential model (Figure 5b). In both cases, hmc does not produce any useful samples within a budget of one hour; we omit hmc from the plots. 3.3 Scaling to Large Datasets: Gaussian Mixture Model We conclude with the Gaussian mixture model (gmm) example we highlighted earlier. This is a nonconjugate gmm applied to color image histograms. We place a Dirichlet prior on the mixture proportions, a Gaussian prior on the component means, and a lognormal prior on the standard deviations. (Details in Appendix J.) We explore the imageclef dataset, which has 250 000 images [25]. We withhold 10 000 images for evaluation. In Figure 1a we randomly select 1000 images and train a model with 10 mixture components. nuts struggles to find an adequate solution and hmc fails altogether. This is likely due to label switching, which can affect hmc-based techniques in mixture models [4]. Figure 1b shows advi results on the full dataset. Here we use advi with stochastic subsampling of minibatches from the dataset [3]. We increase the number of mixture components to 30. With a minibatch size of 500 or larger, advi reaches high predictive accuracy. Smaller minibatch sizes lead to suboptimal solutions, an effect also observed in [3]. advi converges in about two hours. 4 Conclusion We develop automatic differentiation variational inference (advi) in Stan. advi leverages automatic transformations, an implicit non-Gaussian variational approximation, and automatic differentiation. This is a valuable tool. 
We can explore many models and analyze large datasets with ease. We emphasize that advi is currently available as part of Stan; it is ready for anyone to use. Acknowledgments We thank Dustin Tran, Bruno Jacobs, and the reviewers for their comments. This work is supported by NSF IIS-0745520, IIS-1247664, IIS-1009542, SES-1424962, ONR N00014-11-1-0651, DARPA FA8750-14-2-0009, N66001-15-C-4032, Sloan G-2015-13987, IES DE R305D140059, NDSEG, Facebook, Adobe, Amazon, and the Siebel Scholar and John Templeton Foundations. 8 References [1] Michael I Jordan, Zoubin Ghahramani, Tommi S Jaakkola, and Lawrence K Saul. An introduction to variational methods for graphical models. Machine Learning, 37(2):183–233, 1999. [2] Martin J Wainwright and Michael I Jordan. Graphical models, exponential families, and variational inference. Foundations and Trends in Machine Learning, 1(1-2):1–305, 2008. [3] Matthew D Hoffman, David M Blei, Chong Wang, and John Paisley. Stochastic variational inference. The Journal of Machine Learning Research, 14(1):1303–1347, 2013. [4] Stan Development Team. Stan Modeling Language Users Guide and Reference Manual, 2015. [5] Matthew D Hoffman and Andrew Gelman. The No-U-Turn sampler. The Journal of Machine Learning Research, 15(1):1593–1623, 2014. [6] Diederik Kingma and Max Welling. Auto-encoding variational Bayes. arXiv:1312.6114, 2013. [7] Danilo J Rezende, Shakir Mohamed, and Daan Wierstra. Stochastic backpropagation and approximate inference in deep generative models. In ICML, pages 1278–1286, 2014. [8] Rajesh Ranganath, Sean Gerrish, and David Blei. Black box variational inference. In AISTATS, pages 814–822, 2014. [9] Tim Salimans and David Knowles. On using control variates with stochastic approximation for variational Bayes. arXiv preprint arXiv:1401.1022, 2014. [10] Michalis Titsias and Miguel Lázaro-Gredilla. Doubly stochastic variational Bayes for nonconjugate inference. In ICML, pages 1971–1979, 2014. [11] David Wingate and Theophane Weber. 
Automated variational inference in probabilistic programming. arXiv preprint arXiv:1301.1299, 2013. [12] Noah D Goodman, Vikash K Mansinghka, Daniel Roy, Keith Bonawitz, and Joshua B Tenenbaum. Church: A language for generative models. In UAI, pages 220–229, 2008. [13] Vikash Mansinghka, Daniel Selsam, and Yura Perov. Venture: a higher-order probabilistic programming platform with programmable inference. arXiv:1404.0099, 2014. [14] Frank Wood, Jan Willem van de Meent, and Vikash Mansinghka. A new approach to probabilistic programming inference. In AISTATS, pages 2–46, 2014. [15] John M Winn and Christopher M Bishop. Variational message passing. In Journal of Machine Learning Research, pages 661–694, 2005. [16] Christopher M Bishop. Pattern Recognition and Machine Learning. Springer New York, 2006. [17] David J Olive. Statistical Theory and Inference. Springer, 2014. [18] Manfred Opper and Cédric Archambeau. The variational Gaussian approximation revisited. Neural Computation, 21(3):786–792, 2009. [19] Wolfgang Härdle and Léopold Simar. Applied Multivariate Statistical Analysis. Springer, 2012. [20] Christian P Robert and George Casella. Monte Carlo Statistical Methods. Springer, 1999. [21] John Duchi, Elad Hazan, and Yoram Singer. Adaptive subgradient methods for online learning and stochastic optimization. The Journal of Machine Learning Research, 12:2121–2159, 2011. [22] Mark Girolami and Ben Calderhead. Riemann manifold Langevin and Hamiltonian Monte Carlo methods. Journal of the Royal Statistical Society: Series B, 73(2):123–214, 2011. [23] Andrew Gelman and Jennifer Hill. Data Analysis Using Regression and Multilevel/Hierarchical Models. Cambridge University Press, 2006. [24] John Canny. GaP: a factor model for discrete data. In ACM SIGIR, pages 122–129. ACM, 2004. [25] Mauricio Villegas, Roberto Paredes, and Bart Thomee. Overview of the ImageCLEF 2013 Scalable Concept Image Annotation Subtask. In CLEF Evaluation Labs and Workshop, 2013.
Extending Gossip Algorithms to Distributed Estimation of U-Statistics Igor Colin, Joseph Salmon, Stéphan Clémençon LTCI, CNRS, Télécom ParisTech Université Paris-Saclay 75013 Paris, France first.last@telecom-paristech.fr Aurélien Bellet Magnet Team INRIA Lille - Nord Europe 59650 Villeneuve d’Ascq, France aurelien.bellet@inria.fr Abstract Efficient and robust algorithms for decentralized estimation in networks are essential to many distributed systems. Whereas distributed estimation of sample mean statistics has been the subject of a good deal of attention, computation of U-statistics, relying on more expensive averaging over pairs of observations, is a less investigated area. Yet, such data functionals are essential to describe global properties of a statistical population, with important examples including Area Under the Curve, empirical variance, Gini mean difference and within-cluster point scatter. This paper proposes new synchronous and asynchronous randomized gossip algorithms which simultaneously propagate data across the network and maintain local estimates of the U-statistic of interest. We establish convergence rate bounds of O(1/t) and O(log t/t) for the synchronous and asynchronous cases respectively, where t is the number of iterations, with explicit data and network dependent terms. Beyond favorable comparisons in terms of rate analysis, numerical experiments provide empirical evidence that the proposed algorithms surpass the previously introduced approach. 1 Introduction Decentralized computation and estimation have many applications in sensor and peer-to-peer networks as well as for extracting knowledge from massive information graphs such as interlinked Web documents and on-line social media.
Algorithms running on such networks must often operate under tight constraints: the nodes forming the network cannot rely on a centralized entity for communication and synchronization, may not be aware of the global network topology, and may have limited resources (computational power, memory, energy). Gossip algorithms [19, 18, 5], where each node exchanges information with at most one of its neighbors at a time, have emerged as a simple yet powerful technique for distributed computation in such settings. Given a data observation on each node, gossip algorithms can be used to compute averages or sums of functions of the data that are separable across observations (see for example [10, 2, 15, 11, 9] and references therein). Unfortunately, these algorithms cannot be used to efficiently compute quantities that take the form of an average over pairs of observations, also known as U-statistics [12]. Among classical U-statistics used in machine learning and data mining, one can mention, among others: the sample variance, the Area Under the Curve (AUC) of a classifier on distributed data, the Gini mean difference, the Kendall tau rank correlation coefficient, the within-cluster point scatter and several statistical hypothesis test statistics such as Wilcoxon Mann-Whitney [14]. In this paper, we propose randomized synchronous and asynchronous gossip algorithms to efficiently compute a U-statistic, in which each node maintains a local estimate of the quantity of interest throughout the execution of the algorithm. Our methods rely on two types of iterative information exchange in the network: propagation of local observations across the network, and averaging of local estimates. We show that the local estimates generated by our approach converge in expectation to the value of the U-statistic at rates of O(1/t) and O(log t/t) for the synchronous and asynchronous versions respectively, where t is the number of iterations.
These convergence bounds feature data-dependent terms that reflect the hardness of the estimation problem, and network-dependent terms related to the spectral gap of the network graph [3], showing that our algorithms are faster on well-connected networks. The proofs rely on an original reformulation of the problem using “phantom nodes”, i.e., on additional nodes that account for data propagation in the network. Our results largely improve upon those presented in [17]: in particular, we achieve faster convergence together with lower memory and communication costs. Experiments conducted on AUC and within-cluster point scatter estimation using real data confirm the superiority of our approach. The rest of this paper is organized as follows. Section 2 introduces the problem of interest as well as relevant notation. Section 3 provides a brief review of the related work in gossip algorithms. We then describe our approach along with the convergence analysis in Section 4, both in the synchronous and asynchronous settings. Section 5 presents our numerical results. 2 Background 2.1 Definitions and Notations For any integer p > 0, we denote by [p] the set {1, . . . , p} and by |F| the cardinality of any finite set F. We represent a network of size n > 0 as an undirected graph G = (V, E), where V = [n] is the set of vertices and E ⊆ V × V the set of edges. We denote by A(G) the adjacency matrix related to the graph G, that is for all (i, j) ∈ V², [A(G)]ij = 1 if and only if (i, j) ∈ E. For any node i ∈ V, we denote its degree by di = |{j : (i, j) ∈ E}|. We denote by L(G) the graph Laplacian of G, defined by L(G) = D(G) − A(G), where D(G) = diag(d1, . . . , dn) is the matrix of degrees. A graph G = (V, E) is said to be connected if for all (i, j) ∈ V² there exists a path connecting i and j; it is bipartite if there exist S, T ⊂ V such that S ∪ T = V, S ∩ T = ∅ and E ⊆ (S × T) ∪ (T × S). A matrix M ∈ Rn×n is nonnegative (resp. positive) if and only if for all (i, j) ∈ [n]², [M]ij ≥ 0 (resp.
[M]ij > 0). We write M ≥ 0 (resp. M > 0) when this holds. The transpose of M is denoted by M⊤. A matrix P ∈ Rn×n is stochastic if and only if P ≥ 0 and P 1n = 1n, where 1n = (1, . . . , 1)⊤ ∈ Rn. The matrix P ∈ Rn×n is bi-stochastic if and only if P and P⊤ are stochastic. We denote by In the identity matrix in Rn×n, (e1, . . . , en) the standard basis in Rn, I{E} the indicator function of an event E and ∥·∥ the usual ℓ2 norm.

2.2 Problem Statement

Let X be an input space and (X1, . . . , Xn) ∈ Xⁿ a sample of n ≥ 2 points in that space. We assume X ⊆ Rd for some d > 0 throughout the paper, but our results straightforwardly extend to the more general setting. We denote as X = (X1, . . . , Xn)⊤ the design matrix. Let H : X × X → R be a measurable function, symmetric in its two arguments and with H(X, X) = 0 for all X ∈ X. We consider the problem of estimating the following quantity, known as a degree two U-statistic [12]:¹

Ûn(H) = (1/n²) Σ_{i,j=1}^n H(Xi, Xj).   (1)

In this paper, we illustrate the interest of U-statistics on two applications, among many others. The first one is the within-cluster point scatter [4], which measures the clustering quality of a partition P of X as the average distance between points in each cell C ∈ P. It is of the form (1) with

HP(X, X′) = ∥X − X′∥ · Σ_{C∈P} I{(X,X′)∈C²}.   (2)

We also study the AUC measure [8]. For a given sample (X1, ℓ1), . . . , (Xn, ℓn) on X × {−1, +1}, the AUC measure of a linear classifier θ ∈ R^{d−1} is given by:

AUC(θ) = [ Σ_{1≤i,j≤n} (1 − ℓiℓj) I{ℓi(θ⊤Xi) > −ℓj(θ⊤Xj)} ] / [ 4 Σ_{1≤i≤n} I{ℓi=1} Σ_{1≤i≤n} I{ℓi=−1} ].   (3)

¹We point out that the usual definition of a U-statistic differs slightly from (1) by a factor of n/(n − 1).

Algorithm 1 GoSta-sync: a synchronous gossip algorithm for computing a U-statistic
Require: Each node k holds observation Xk
1: Each node k initializes its auxiliary observation Yk = Xk and its estimate Zk = 0
2: for t = 1, 2, . . . do
3:   for p = 1, . . . , n do
4:     Set Zp ← ((t − 1)/t) Zp + (1/t) H(Xp, Yp)
5:   end for
6:   Draw (i, j) uniformly at random from E
7:   Set Zi, Zj ← (1/2)(Zi + Zj)
8:   Swap auxiliary observations of nodes i and j: Yi ↔ Yj
9: end for

This score is the probability for a classifier to rank a positive observation higher than a negative one. We focus here on the decentralized setting, where the data sample is partitioned across a set of nodes in a network. For simplicity, we assume V = [n] and each node i ∈ V only has access to a single data observation Xi.² We are interested in estimating (1) efficiently using a gossip algorithm.

3 Related Work

Gossip algorithms have been extensively studied in the context of decentralized averaging in networks, where the goal is to compute the average of n real numbers (X = R):

X̄n = (1/n) Σ_{i=1}^n Xi = (1/n) X⊤1n.   (4)

One of the earliest works on this canonical problem is due to [19], but more efficient algorithms have recently been proposed, see for instance [10, 2]. Of particular interest to us is the work of [2], which introduces a randomized gossip algorithm for computing the empirical mean (4) in a context where nodes wake up asynchronously and simply average their local estimate with that of a randomly chosen neighbor. The communication probabilities are given by a stochastic matrix P, where pij is the probability that a node i selects neighbor j at a given iteration. As long as the network graph is connected and non-bipartite, the local estimates converge to (4) at a rate O(e^{−ct}), where the constant c can be tied to the spectral gap of the network graph [3], showing faster convergence for well-connected networks.³ Such algorithms can be extended to compute other functions such as maxima and minima, or sums of the form Σ_{i=1}^n f(Xi) for some function f : X → R (as done for instance in [15]). Some work has also gone into developing faster gossip algorithms for poorly connected networks, assuming that nodes know their (partial) geographic location [6, 13].
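The pairwise averaging scheme of [2] described above takes only a few lines to simulate (a toy illustration of our own, on a complete graph with arbitrary node values):

```python
import numpy as np

# Randomized gossip averaging: at each step a uniformly random edge (i, j)
# is drawn and both endpoints replace their local estimates by the pair's
# average. The sum of the estimates is preserved at every step.
rng = np.random.default_rng(0)
n = 8
edges = [(i, j) for i in range(n) for j in range(i + 1, n)]  # complete graph
x = rng.normal(size=n)          # one observation per node
z = x.copy()                    # local estimates
for _ in range(500):
    i, j = edges[rng.integers(len(edges))]
    z[i] = z[j] = 0.5 * (z[i] + z[j])

# Every local estimate converges (exponentially fast) to the global mean.
assert np.max(np.abs(z - x.mean())) < 1e-6
```

Because each update only redistributes mass between two nodes, the network-wide sum is invariant, which is why the common limit must be the global mean.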
For a detailed account of the literature on gossip algorithms, we refer the reader to [18, 5]. However, existing gossip algorithms cannot be used to efficiently compute (1) as it depends on pairs of observations. To the best of our knowledge, this problem has only been investigated in [17]. Their algorithm, coined U2-gossip, achieves O(1/t) convergence rate but has several drawbacks. First, each node must store two auxiliary observations, and two pairs of nodes must exchange an observation at each iteration. For high-dimensional problems (large d), this leads to a significant memory and communication load. Second, the algorithm is not asynchronous as every node must update its estimate at each iteration. Consequently, nodes must have access to a global clock, which is often unrealistic in practice. In the next section, we introduce new synchronous and asynchronous algorithms with faster convergence as well as smaller memory and communication cost per iteration.

4 GoSta Algorithms

In this section, we introduce gossip algorithms for computing (1). Our approach is based on the observation that Ûn(H) = (1/n) Σ_{i=1}^n h_i, with h_i = (1/n) Σ_{j=1}^n H(Xi, Xj), and we write h = (h_1, . . . , h_n)⊤. The goal is thus similar to the usual distributed averaging problem (4), with the key difference that each local value h_i is itself an average depending on the entire data sample. Consequently, our algorithms will combine two steps at each iteration: a data propagation step to allow each node i to estimate h_i, and an averaging step to ensure convergence to the desired value Ûn(H).

²Our results generalize to the case where each node holds a subset of the observations (see Section 4).
³For the sake of completeness, we provide an analysis of this algorithm in the supplementary material.

Figure 1: Comparison of the original network G (a) and the new “phantom network” G̃ (b).
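The decomposition of Ûn(H) into the per-node averages h_i can be checked centrally (a toy sketch of our own; the kernel H here is the Euclidean distance, an arbitrary symmetric choice with H(X, X) = 0):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(6, 2))                       # n = 6 points in R^2

def H(a, b):
    return np.linalg.norm(a - b)                  # symmetric, H(x, x) = 0

n = len(X)
# Per-node averages h_i = (1/n) sum_j H(X_i, X_j), which each GoSta node estimates.
h = np.array([sum(H(X[i], X[j]) for j in range(n)) / n for i in range(n)])
U = h.mean()                                      # hat{U}_n(H) = (1/n) sum_i h_i

# Same value from the double sum in Eq. (1):
U_direct = sum(H(X[i], X[j]) for i in range(n) for j in range(n)) / n**2
assert abs(U - U_direct) < 1e-12
```

The gossip algorithms below never form h explicitly; each node i only maintains a running estimate of h_i while the averaging step mixes these estimates toward their common mean.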
We first present the algorithm and its analysis for the (simpler) synchronous setting in Section 4.1, before introducing an asynchronous version (Section 4.2).

4.1 Synchronous Setting

In the synchronous setting, we assume that the nodes have access to a global clock so that they can all update their estimate at each time instance. We stress that the nodes need not be aware of the global network topology as they will only interact with their direct neighbors in the graph. Let us denote by Zk(t) the (local) estimate of Ûn(H) by node k at iteration t. In order to propagate data across the network, each node k maintains an auxiliary observation Yk, initialized to Xk. Our algorithm, coined GoSta, goes as follows. At each iteration, each node k updates its local estimate by taking the running average of Zk(t) and H(Xk, Yk). Then, an edge of the network is drawn uniformly at random, and the corresponding pair of nodes average their local estimates and swap their auxiliary observations. The observations are thus each performing a random walk (albeit coupled) on the network graph. The full procedure is described in Algorithm 1.

In order to prove the convergence of Algorithm 1, we consider an equivalent reformulation of the problem which allows us to model the data propagation and the averaging steps separately. Specifically, for each k ∈ V, we define a phantom Gk = (Vk, Ek) of the original network G, with Vk = {v_i^k ; 1 ≤ i ≤ n} and Ek = {(v_i^k, v_j^k) ; (i, j) ∈ E}. We then create a new graph G̃ = (Ṽ, Ẽ), where each node k ∈ V is connected to its counterpart v_k^k ∈ Vk:

Ṽ = V ∪ (∪_{k=1}^n Vk),
Ẽ = E ∪ (∪_{k=1}^n Ek) ∪ {(k, v_k^k) ; k ∈ V}.

The construction of G̃ is illustrated in Figure 1. In this new graph, the nodes V from the original network will hold the estimates Z1(t), . . . , Zn(t) as described above. The role of each Gk is to simulate the data propagation in the original graph G. For i ∈ [n], v_i^k ∈ Vk initially holds the value H(Xk, Xi).
At each iteration, we draw a random edge (i, j) of G and nodes v_i^k and v_j^k swap their value for all k ∈ [n]. To update its estimate, each node k will use the current value at v_k^k. We can now represent the system state at iteration t by a vector S(t) = (S1(t)⊤, S2(t)⊤)⊤ ∈ R^{n+n²}. The first n coefficients, S1(t), are associated with nodes in V and correspond to the estimate vector Z(t) = [Z1(t), . . . , Zn(t)]⊤. The last n² coefficients, S2(t), are associated with nodes in (Vk)_{1≤k≤n} and represent the data propagation in the network. Their initial value is set to S2(0) = (e1⊤H, . . . , en⊤H), so that for any (k, l) ∈ [n]², node v_l^k initially stores the value H(Xk, Xl).

Remark 1. The “phantom network” G̃ is of size O(n²), but we stress the fact that it is used solely as a tool for the convergence analysis: Algorithm 1 operates on the original graph G.

The transition matrix of this system accounts for three events: the averaging step (the action of G on itself), the data propagation (the action of Gk on itself for all k ∈ V) and the estimate update (the action of Gk on node k for all k ∈ V). At a given step t > 0, we are interested in characterizing the transition matrix M(t) such that E[S(t + 1)] = M(t)E[S(t)]. For the sake of clarity, we write M(t) as an upper block-triangular (n + n²) × (n + n²) matrix:

M(t) = [ M1(t)  M2(t) ; 0  M3(t) ],   (5)

with M1(t) ∈ R^{n×n}, M2(t) ∈ R^{n×n²} and M3(t) ∈ R^{n²×n²}. The bottom left part is necessarily 0, because G does not influence any Gk. The upper left M1(t) block corresponds to the averaging step; therefore, for any t > 0, we have:
0 · · · 0 W1 (G) | {z } C , where M2(t) is a block diagonal matrix corresponding to the observations being propagated, and M3(t) represents the estimate update for each node k. Note that M3(t) = W1 (G) ⊗In where ⊗is the Kronecker product. We can now describe the expected state evolution. At iteration t = 0, one has: E[S(1)] = M(1)E[S(0)] = M(1)S(0) = 0 B 0 C 0 S2(0) = BS2(0) CS2(0) . (7) Using recursion, we can write: E[S(t)] = M(t)M(t −1) . . . M(1)S(0) = 1 t Pt s=1 W2 (G)t−s BCs−1S2(0) CtS2(0) . (8) Therefore, in order to prove the convergence of Algorithm 1, one needs to show that limt→+∞1 t Pt s=1 W2 (G)t−s BCs−1S2(0) = ˆUn(H)1n. We state this precisely in the next theorem. Theorem 1. Let G be a connected and non-bipartite graph with n nodes, X ∈Rn×d a design matrix and (Z(t)) the sequence of estimates generated by Algorithm 1. For all k ∈[n], we have: lim t→+∞E[Zk(t)] = 1 n2 X 1≤i,j≤n H(Xi, Xj) = ˆUn(H). (9) Moreover, for any t > 0,
\[
\big\| E[Z(t)] - \hat{U}_n(H)\, \mathbf{1}_n \big\| \;\le\; \frac{1}{ct}\, \big\| h - \hat{U}_n(H)\, \mathbf{1}_n \big\| + \Big( \frac{2}{ct} + e^{-ct} \Big) \big\| H - h\, \mathbf{1}_n^\top \big\|,
\]
where c = c(G) := 1 − λ_2(2) and λ_2(2) is the second largest eigenvalue of W_2(G).

Proof. See supplementary material.

Theorem 1 shows that the local estimates generated by Algorithm 1 converge to Û_n(H) at a rate O(1/t). Furthermore, the constants reveal the dependency of the rate on the particular problem instance. Indeed, the two norm terms are data-dependent and quantify the difficulty of the estimation problem itself through a dispersion measure. In contrast, c(G) is a network-dependent term, since 1 − λ_2(2) = β_{n−1}/|E|, where β_{n−1} is the second smallest eigenvalue of the graph Laplacian L(G) (see Lemma 1 in the supplementary material). The value β_{n−1} is also known as the spectral gap of G, and graphs with a larger spectral gap typically have better connectivity [3]. This will be illustrated in Section 5.

Algorithm 2 GoSta-async: an asynchronous gossip algorithm for computing a U-statistic
Require: Each node k holds observation X_k and p_k = 2d_k/|E|
1: Each node k initializes Y_k = X_k, Z_k = 0 and m_k = 0
2: for t = 1, 2, . . . do
3:   Draw (i, j) uniformly at random from E
4:   Set m_i ← m_i + 1/p_i and m_j ← m_j + 1/p_j
5:   Set Z_i, Z_j ← (Z_i + Z_j)/2
6:   Set Z_i ← (1 − 1/(p_i m_i)) Z_i + (1/(p_i m_i)) H(X_i, Y_i)
7:   Set Z_j ← (1 − 1/(p_j m_j)) Z_j + (1/(p_j m_j)) H(X_j, Y_j)
8:   Swap the auxiliary observations of nodes i and j: Y_i ↔ Y_j
9: end for

Comparison to U2-gossip. To estimate Û_n(H), U2-gossip [17] does not use averaging. Instead, each node k requires two auxiliary observations Y_k^(1) and Y_k^(2), which are both initialized to X_k. At each iteration, each node k updates its local estimate by taking the running average of Z_k and H(Y_k^(1), Y_k^(2)). Then, two random edges are selected: the nodes connected by the first (resp. second) edge swap their first (resp. second) auxiliary observations. A precise statement of the algorithm is provided in the supplementary material.
U2-gossip has several drawbacks compared to GoSta: it requires initiating communication between two pairs of nodes at each iteration, and the amount of communication and memory required is higher (especially when data is high-dimensional). Furthermore, applying our convergence analysis to U2-gossip, we obtain the following refined rate:4
\[
\big\| E[Z(t)] - \hat{U}_n(H)\, \mathbf{1}_n \big\| \;\le\; \frac{\sqrt{n}}{t} \bigg( \frac{2}{1 - \lambda_2(1)}\, \big\| h - \hat{U}_n(H)\, \mathbf{1}_n \big\| + \frac{1}{1 - \lambda_2(1)^2}\, \big\| H - h\, \mathbf{1}_n^\top \big\| \bigg), \qquad (10)
\]

where 1 − λ_2(1) = 2(1 − λ_2(2)) = 2c(G) and λ_2(1) is the second largest eigenvalue of W_1(G). The advantage of propagating two observations in U2-gossip is seen in the 1/(1 − λ_2(1)²) term; however, the absence of averaging leads to an overall √n factor. Intuitively, this is because the nodes do not benefit from each other's estimates. In practice, λ_2(2) and λ_2(1) are close to 1 for reasonably-sized networks (for instance, λ_2(2) = 1 − 1/n for the complete graph), so the squared term does not provide much gain and the √n factor dominates in (10). We thus expect U2-gossip to converge more slowly than GoSta, which is confirmed by the numerical results presented in Section 5.

4.2 Asynchronous Setting

In practical settings, nodes may not have access to a global clock to synchronize the updates. In this section, we remove the global clock assumption and propose a fully asynchronous algorithm where each node has a local clock ticking according to a rate-1 Poisson process. Since the local clocks are i.i.d., one can use an equivalent model with a single global clock ticking according to a rate-n Poisson process and a random edge drawn at each iteration, as in the synchronous setting (one may refer to [2] for more details on clock modeling). However, at a given iteration, the estimate update step now only involves the selected pair of nodes. Therefore, the nodes need to maintain an estimate of the current iteration number to ensure convergence to an unbiased estimate of Û_n(H). Hence, for all k ∈ [n], let p_k ∈ [0, 1] denote the probability of node k being picked at any iteration. With our assumption that nodes activate with a uniform distribution over E, p_k = 2d_k/|E|. Moreover, the number of times a node k has been selected up to a given iteration t > 0 follows a binomial distribution with parameters t and p_k. Let us define m_k(t) such that m_k(0) = 0 and, for t > 0:

\[
m_k(t) = \begin{cases} m_k(t-1) + \frac{1}{p_k} & \text{if } k \text{ is picked at iteration } t, \\ m_k(t-1) & \text{otherwise.} \end{cases} \qquad (11)
\]

For any k ∈ [n] and any t > 0, one has E[m_k(t)] = t · p_k · (1/p_k) = t.
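The asynchronous update (Algorithm 2) can be sketched directly from its pseudocode. In the sketch below, the kernel, graph and horizon are illustrative assumptions; the activation probability p_k is computed as the probability that node k belongs to a uniformly drawn undirected edge (d_k divided by the number of undirected edges, which coincides with 2d_k/|E| when |E| counts directed edges).

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6
X = rng.uniform(size=n)
H = lambda a, b: a * b                   # illustrative kernel
U_hat = np.mean(np.outer(X, X))          # target U-statistic

edges = [(i, j) for i in range(n) for j in range(i + 1, n)]  # complete graph
deg = np.full(n, n - 1)
p = deg / len(edges)                     # P(node k belongs to the drawn edge)

Y, Z, m = X.copy(), np.zeros(n), np.zeros(n)
T = 100_000
for t in range(1, T + 1):
    i, j = edges[rng.integers(len(edges))]
    m[i] += 1.0 / p[i]; m[j] += 1.0 / p[j]   # unbiased iteration estimates (Eq. 11)
    Z[i] = Z[j] = 0.5 * (Z[i] + Z[j])        # average the two local estimates
    Z[i] = (1 - 1 / (p[i] * m[i])) * Z[i] + H(X[i], Y[i]) / (p[i] * m[i])
    Z[j] = (1 - 1 / (p[j] * m[j])) * Z[j] + H(X[j], Y[j]) / (p[j] * m[j])
    Y[i], Y[j] = Y[j], Y[i]                  # swap auxiliary observations
```

After T iterations, each m_k concentrates around T, and all local estimates approach Û_n(H), consistent with the unbiasedness computation above.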
Given that every node knows its degree and the total number of edges in the network, the iteration estimates m_k(t) are therefore unbiased. We can now give an asynchronous version of GoSta, as stated in Algorithm 2. To show that the local estimates converge to Û_n(H), we use a similar model as in the synchronous setting. The time dependency of the transition matrix is more complex, and so is the upper bound.

4 The proof can be found in the supplementary material.

Table 1: Value of 1 − λ_2(2) for each network.

Dataset                    Complete graph   Watts-Strogatz   2d-grid graph
Wine Quality (n = 1599)    6.26 · 10⁻⁴      2.72 · 10⁻⁵      3.66 · 10⁻⁶
SVMguide3 (n = 1260)       7.94 · 10⁻⁴      5.49 · 10⁻⁵      6.03 · 10⁻⁶

Theorem 2. Let G be a connected and non-bipartite graph with n nodes, X ∈ R^{n×d} a design matrix, and (Z(t)) the sequence of estimates generated by Algorithm 2. For all k ∈ [n], we have:

\[
\lim_{t\to+\infty} E[Z_k(t)] = \frac{1}{n^2} \sum_{1 \le i,j \le n} H(X_i, X_j) = \hat{U}_n(H). \qquad (12)
\]

Moreover, there exists a constant c′(G) > 0 such that, for any t > 1,
\[
\big\| E[Z(t)] - \hat{U}_n(H)\, \mathbf{1}_n \big\| \;\le\; c'(G) \cdot \frac{\log t}{t}\, \| H \|. \qquad (13)
\]

Proof. See supplementary material.

Remark 2. Our methods can be extended to the situation where nodes contain multiple observations: when drawn, a node will pick a random auxiliary observation to swap. Similar convergence results are achieved by splitting each node into a set of nodes, each containing only one observation, with the new edges weighted judiciously.

5 Experiments

In this section, we present two applications on real datasets: the decentralized estimation of the Area Under the ROC Curve (AUC) and of the within-cluster point scatter. We compare the performance of our algorithms to that of U2-gossip [17]; see the supplementary material for additional comparisons to some baseline methods. We perform our simulations on the three types of network described below (the corresponding values of 1 − λ_2(2) are shown in Table 1).

• Complete graph: This is the case where all nodes are connected to each other. It is the ideal situation in our framework, since any pair of nodes can communicate directly. For a complete graph G of size n > 0, 1 − λ_2(2) = 1/n; see [1, Ch. 9] or [3, Ch. 1] for details.

• Two-dimensional grid: Here, nodes are located on a 2D grid, and each node is connected to its four neighbors on the grid. This network is a regular graph with isotropic communication, but its diameter (√n) is quite high, especially in comparison to usual scale-free networks.

• Watts-Strogatz: This random network generation technique was introduced in [20] and allows us to create networks with various communication properties. It relies on two parameters: the average degree of the network k and a rewiring probability p. In expectation, the higher the rewiring probability, the better the connectivity of the network. Here, we use k = 5 and p = 0.3 to achieve a connectivity compromise between the complete graph and the two-dimensional grid.

AUC measure. We first focus on the AUC measure of a linear classifier θ as defined in (3).
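The network-dependent quantity 1 − λ_2(2) shown in Table 1 can be computed directly from the edge set for small graphs. The sketch below (graph sizes are illustrative) confirms that better-connected topologies have larger spectral gaps; note that this direct construction evaluates to exactly 1/(n − 1) for the complete graph, consistent with the ≈ 1/n value quoted above for large n.

```python
import numpy as np

def gap(edges, n):
    """Spectral gap 1 - lambda_2(2) of W_2(G), built as the average of the
    pairwise-averaging matrices I - (1/2)(e_i - e_j)(e_i - e_j)^T."""
    W = np.zeros((n, n))
    for i, j in edges:
        e = np.zeros(n)
        e[i], e[j] = 1.0, -1.0
        W += np.eye(n) - 0.5 * np.outer(e, e)
    W /= len(edges)
    lam = np.sort(np.linalg.eigvalsh(W))   # ascending; lam[-1] = 1 (vector of ones)
    return 1.0 - lam[-2]

n = 20
complete = [(i, j) for i in range(n) for j in range(i + 1, n)]
path = [(i, i + 1) for i in range(n - 1)]   # poorly connected comparison graph
g_complete, g_path = gap(complete, n), gap(path, n)
# the complete graph has a much larger gap, hence much faster convergence
```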
We use the SVMguide3 binary classification dataset, which contains n = 1260 points in d = 23 dimensions.5 We set θ to the difference between the class means. For each generated network, we perform 50 runs of GoSta-sync (Algorithm 1) and U2-gossip. The top row of Figure 2 shows the evolution over time of the average relative error and the associated standard deviation across nodes for both algorithms on each type of network. On average, GoSta-sync outperforms U2-gossip on every network. The variance of the estimates across nodes is also lower, due to the averaging step. Interestingly, the performance gap between the two algorithms widens significantly early on, presumably because the exponential term in the convergence bound of GoSta-sync is significant in the first steps.

Within-cluster point scatter. We then turn to the within-cluster point scatter defined in (2). We use the Wine Quality dataset, which contains n = 1599 points in d = 12 dimensions, with a total of K = 11 classes.6 We focus on the partition P associated with the class centroids and run the aforementioned methods 50 times. The results are shown in the bottom row of Figure 2.

5 This dataset is available at http://mldata.org/repository/data/viewslug/svmguide3/
6 This dataset is available at https://archive.ics.uci.edu/ml/datasets/Wine

Figure 2: Evolution of the average relative error (solid line) and its standard deviation (filled area) with the number of iterations for U2-gossip (red) and Algorithm 1 (blue) on the SVMguide3 dataset (top row) and the Wine Quality dataset (bottom row).

Figure 3: (a) 20% error reaching time. (b) Average relative error. Panel (a) shows the average number of iterations needed to reach a relative error below 0.2, for several network sizes n ∈ [50, 1599]. Panel (b) compares the relative error (solid line) and its standard deviation (filled area) of the synchronous (blue) and asynchronous (red) versions of GoSta.
As in the case of AUC, GoSta-sync achieves better performance on all types of networks, both in terms of average error and variance. In Figure 3a, we show the average time needed to reach a 0.2 relative error on complete graphs ranging from n = 50 to n = 1599. As predicted by our analysis, the performance gap widens in favor of GoSta as the size of the graph increases. Finally, we compare the performance of GoSta-sync and GoSta-async (Algorithm 2) in Figure 3b. Despite the slightly worse theoretical convergence rate for GoSta-async, both algorithms have comparable performance in practice.

6 Conclusion

We have introduced new synchronous and asynchronous randomized gossip algorithms to compute statistics that depend on pairs of observations (U-statistics). We have proved the convergence rate in both settings, and numerical experiments confirm the practical interest of the proposed algorithms. In future work, we plan to investigate whether adaptive communication schemes (such as those of [6, 13]) can be used to speed up our algorithms. Our contribution could also be used as a building block for decentralized optimization of U-statistics, extending for instance the approaches of [7, 16].

Acknowledgements

This work was supported by the chair Machine Learning for Big Data of Télécom ParisTech, and was conducted when A. Bellet was affiliated with Télécom ParisTech.

References

[1] Béla Bollobás. Modern Graph Theory, volume 184. Springer, 1998.
[2] Stephen P. Boyd, Arpita Ghosh, Balaji Prabhakar, and Devavrat Shah. Randomized gossip algorithms. IEEE Transactions on Information Theory, 52(6):2508–2530, 2006.
[3] Fan R. K. Chung. Spectral Graph Theory, volume 92. American Mathematical Society, 1997.
[4] Stéphan Clémençon. On U-processes and clustering performance. In Advances in Neural Information Processing Systems 24, pages 37–45, 2011.
[5] Alexandros G. Dimakis, Soummya Kar, José M. F. Moura, Michael G. Rabbat, and Anna Scaglione.
Gossip Algorithms for Distributed Signal Processing. Proceedings of the IEEE, 98(11):1847–1864, 2010.
[6] Alexandros G. Dimakis, Anand D. Sarwate, and Martin J. Wainwright. Geographic Gossip: Efficient Averaging for Sensor Networks. IEEE Transactions on Signal Processing, 56(3):1205–1216, 2008.
[7] John C. Duchi, Alekh Agarwal, and Martin J. Wainwright. Dual Averaging for Distributed Optimization: Convergence Analysis and Network Scaling. IEEE Transactions on Automatic Control, 57(3):592–606, 2012.
[8] James A. Hanley and Barbara J. McNeil. The meaning and use of the area under a receiver operating characteristic (ROC) curve. Radiology, 143(1):29–36, 1982.
[9] Richard Karp, Christian Schindelhauer, Scott Shenker, and Berthold Vöcking. Randomized rumor spreading. In Symposium on Foundations of Computer Science, pages 565–574. IEEE, 2000.
[10] David Kempe, Alin Dobra, and Johannes Gehrke. Gossip-Based Computation of Aggregate Information. In Symposium on Foundations of Computer Science, pages 482–491. IEEE, 2003.
[11] Wojtek Kowalczyk and Nikos A. Vlassis. Newscast EM. In Advances in Neural Information Processing Systems, pages 713–720, 2004.
[12] Alan J. Lee. U-Statistics: Theory and Practice. Marcel Dekker, New York, 1990.
[13] Wenjun Li, Huaiyu Dai, and Yanbing Zhang. Location-Aided Fast Distributed Consensus in Wireless Networks. IEEE Transactions on Information Theory, 56(12):6208–6227, 2010.
[14] Henry B. Mann and Donald R. Whitney. On a Test of Whether one of Two Random Variables is Stochastically Larger than the Other. Annals of Mathematical Statistics, 18(1):50–60, 1947.
[15] Damon Mosk-Aoyama and Devavrat Shah. Fast distributed algorithms for computing separable functions. IEEE Transactions on Information Theory, 54(7):2997–3007, 2008.
[16] Angelia Nedic and Asuman Ozdaglar. Distributed subgradient methods for multi-agent optimization. IEEE Transactions on Automatic Control, 54(1):48–61, 2009.
[17] Kristiaan Pelckmans and Johan Suykens.
Gossip Algorithms for Computing U-Statistics. In IFAC Workshop on Estimation and Control of Networked Systems, pages 48–53, 2009.
[18] Devavrat Shah. Gossip Algorithms. Foundations and Trends in Networking, 3(1):1–125, 2009.
[19] John N. Tsitsiklis. Problems in decentralized decision making and computation. PhD thesis, Massachusetts Institute of Technology, 1984.
[20] Duncan J. Watts and Steven H. Strogatz. Collective dynamics of ‘small-world’ networks. Nature, 393(6684):440–442, 1998.
Model-Based Relative Entropy Stochastic Search

Abbas Abdolmaleki1,2,3, Rudolf Lioutikov4, Nuno Lau1, Luis Paulo Reis2,3, Jan Peters4,6, and Gerhard Neumann5
1: IEETA, University of Aveiro, Aveiro, Portugal
2: DSI, University of Minho, Braga, Portugal
3: LIACC, University of Porto, Porto, Portugal
4: IAS, 5: CLAS, TU Darmstadt, Darmstadt, Germany
6: Max Planck Institute for Intelligent Systems, Stuttgart, Germany
{Lioutikov,peters,neumann}@ias.tu-darmstadt.de
{abbas.a, nunolau}@ua.pt, lpreis@dsi.uminho.pt

Abstract

Stochastic search algorithms are general black-box optimizers. Due to their ease of use and their generality, they have recently also gained a lot of attention in operations research, machine learning and policy search. Yet, these algorithms require a lot of evaluations of the objective, scale poorly with the problem dimension, are affected by highly noisy objective functions and may converge prematurely. To alleviate these problems, we introduce a new surrogate-based stochastic search approach. We learn simple, quadratic surrogate models of the objective function. As the quality of such a quadratic approximation is limited, we do not greedily exploit the learned models, since the algorithm could otherwise be misled by inaccurate optima introduced by the surrogate. Instead, we use information-theoretic constraints to bound the ‘distance’ between the new and old data distribution while maximizing the objective function. Additionally, the new method is able to sustain the exploration of the search distribution to avoid premature convergence. We compare our method with state-of-the-art black-box optimization methods on standard uni-modal and multi-modal optimization functions, on simulated planar robot tasks and on a complex robot ball-throwing task. The proposed method considerably outperforms the existing approaches.
1 Introduction

Stochastic search algorithms [1, 2, 3, 4] are black-box optimizers of an objective function that is either unknown or too complex to be modeled explicitly. These algorithms make only weak assumptions on the structure of the underlying objective function. They use only the objective values and do not require gradients or higher derivatives of the objective function. Therefore, they are well suited for black-box optimization problems. Stochastic search algorithms typically maintain a stochastic search distribution over the parameters of the objective function, which is typically a multivariate Gaussian distribution [1, 2, 3]. This search distribution is used to create samples from the objective function. Subsequently, a new stochastic search distribution is computed by either computing gradient-based updates [2, 4, 5], evolutionary strategies [1], the cross-entropy method [7], path integrals [3, 8], or information-theoretic policy updates [9]. Information-theoretic policy updates [10, 9, 2] bound the relative entropy (also called Kullback-Leibler or KL divergence) between two subsequent policies. Using a KL-bound for the update of the search distribution is a common approach in stochastic search. However, such information-theoretic bounds could so far only be applied approximately, either by using Taylor expansions of the KL divergence, resulting in natural evolutionary strategies (NES) [2, 11], or by sample-based approximations, resulting in the relative entropy policy search (REPS) [9] algorithm. In this paper, we present a novel stochastic search algorithm called MOdel-based Relative-Entropy stochastic search (MORE). For the first time, our algorithm bounds the KL divergence of the new and old search distribution in closed form without approximations. We show that this exact bound performs considerably better than approximated KL bounds. In order to do so, we locally learn a simple, quadratic surrogate of the objective function.
The quadratic surrogate allows us to compute the new search distribution analytically, with the KL divergence of the new and old distribution bounded. Therefore, we only exploit the surrogate model locally, which prevents the algorithm from being misled by inaccurate optima introduced by an inaccurate surrogate model. However, learning quadratic reward models directly in parameter space comes with the burden of quadratically many parameters that need to be estimated. We therefore investigate new methods that rely on dimensionality reduction for learning such surrogate models. In order to avoid over-fitting, we use a supervised Bayesian dimensionality reduction approach, which makes the algorithm applicable also to high-dimensional problems. In addition to solving the search distribution update in closed form, we also lower-bound the entropy of the new search distribution to ensure that exploration is sustained throughout the learning progress, and, hence, premature convergence is avoided. We will show that this method is more effective than commonly used heuristics that also enforce exploration, for example, adding a small diagonal matrix to the estimated covariance matrix [3]. We provide a comparison of stochastic search algorithms on standard objective functions used for benchmarking and on simulated robotics tasks. The results show that MORE considerably outperforms state-of-the-art methods.

1.1 Problem Statement

We want to maximize an objective function R(θ) : R^n → R. The goal is to find one or more parameter vectors θ ∈ R^n which have the highest possible objective value. We maintain a search distribution π(θ) over the parameter space of the objective function R(θ). The search distribution π(θ) is implemented as a multivariate Gaussian distribution, i.e., π(θ) = N(θ | µ, Σ). In each iteration, the search distribution π(θ) is used to create samples θ^[k] of the parameter vector θ.
Subsequently, the (possibly noisy) evaluation R^[k] of θ^[k] is obtained by querying the objective function. The samples {θ^[k], R^[k]}_{k=1...N} are then used to compute a new search distribution. This process runs iteratively until the algorithm converges to a solution.

1.2 Related Work

Recent information-theoretic (IT) policy search algorithms [9] are based on the relative entropy policy search (REPS) algorithm, which was proposed in [10] as a step-based policy search algorithm. However, in [9] an episode-based version of REPS that is equivalent to stochastic search was presented. The key idea behind episode-based REPS is to control the exploration-exploitation trade-off by bounding the relative entropy between the old ‘data’ distribution q(θ) and the newly estimated search distribution π(θ) by a factor ϵ. Due to the relative entropy bound, the algorithm achieves a smooth and stable learning process. However, the episodic REPS algorithm uses a sample-based approximation of the KL-bound, which needs a lot of samples in order to be accurate. Moreover, a typical problem of REPS is that the entropy of the search distribution decreases too quickly, resulting in premature convergence. Taylor approximations of the KL divergence have also been used very successfully in the area of stochastic search, resulting in natural evolutionary strategies (NES). NES uses the natural gradient to optimize the objective [2]. The natural gradient has been shown to outperform the standard gradient in many applications in machine learning [12]. The intuition behind the natural gradient is that we want to obtain an update direction for the parameters of the search distribution that is most similar to the standard gradient while the KL divergence between the new and old search distributions is bounded. To obtain this update direction, a second-order approximation of the KL, which is equivalent to the Fisher information matrix, is used.
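The sample–evaluate–update loop of Section 1.1 can be made concrete with the cross-entropy method [7] as the distribution update. The objective, hyperparameters and elite-selection rule below are illustrative assumptions, not the MORE update.

```python
import numpy as np

def cem(objective, mu, sigma, n_samples=50, n_elite=10, iters=100, seed=0):
    """Gaussian stochastic search with a cross-entropy-style update (illustrative)."""
    rng = np.random.default_rng(seed)
    cov = np.eye(len(mu)) * sigma ** 2
    for _ in range(iters):
        thetas = rng.multivariate_normal(mu, cov, size=n_samples)  # sample theta^[k]
        R = np.array([objective(th) for th in thetas])             # evaluate R^[k]
        elite = thetas[np.argsort(R)[-n_elite:]]                   # keep the best samples
        mu = elite.mean(axis=0)                                    # refit the Gaussian
        cov = np.cov(elite.T) + 1e-6 * np.eye(len(mu))             # jitter keeps cov PSD
    return mu

theta_star = np.array([1.0, -2.0, 0.5])
R = lambda th: -np.sum((th - theta_star) ** 2)   # toy objective, maximum at theta_star
mu_found = cem(R, mu=np.zeros(3), sigma=2.0)
```

Refitting only on elite samples shrinks the covariance quickly, which also illustrates the premature-convergence risk that the entropy constraint of MORE is designed to counteract.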
Surrogate-based stochastic search algorithms [6, 13] have been shown to be more sample-efficient than direct stochastic search methods and can also smooth out the noise of the objective function. For example, in [6] an individual optimization method is run on the surrogate and stopped whenever the KL divergence between the new and the old distribution exceeds a certain bound. For the first time, our algorithm uses the surrogate model to compute the new search distribution analytically, in closed form, such that the KL divergence of the new and old search distribution is bounded. Quadratic models have been used successfully in trust-region methods for local surrogate approximation [14, 15]. These methods do not maintain a stochastic search distribution but a point estimate together with a trust region around this point. They update the point estimate by optimizing the surrogate while staying inside the trusted region. Subsequently, heuristics are used to increase or decrease the trusted region. In the MORE algorithm, the trusted region is defined implicitly by the KL-bound. The Covariance Matrix Adaptation Evolutionary Strategy (CMA-ES) is considered the state of the art in stochastic search optimization. CMA-ES also maintains a Gaussian distribution over the problem parameter vector and uses well-defined heuristics to update the search distribution.

2 Model-Based Relative Entropy Stochastic Search

Similar to information-theoretic policy search algorithms [9], we want to control the exploration-exploitation trade-off by bounding the relative entropy of two subsequent search distributions. By bounding the KL, the algorithm can adapt both the mean and the variance of the search distribution. However, when maximizing the objective for the immediate iteration, the shrinkage of the variance typically dominates the contribution to the KL divergence, which often leads to premature convergence of these algorithms.
Hence, in addition to controlling the KL divergence of the update, we also need to control the shrinkage of the covariance matrix. Such a control mechanism can be implemented by lower-bounding the entropy of the new distribution. In this paper, we will always set this bound to a certain percentage of the entropy of the old search distribution, such that MORE converges asymptotically to a point estimate.

2.1 The MORE framework

As in [9], we can formulate an optimization problem to obtain a new search distribution that maximizes the expected objective value while upper-bounding the KL divergence and lower-bounding the entropy of the distribution:

\[
\max_{\pi} \int \pi(\theta)\, \mathcal{R}_\theta\, d\theta, \quad \text{s.t.} \quad \mathrm{KL}\big(\pi(\theta)\,\|\,q(\theta)\big) \le \epsilon, \quad H(\pi) \ge \beta, \quad 1 = \int \pi(\theta)\, d\theta, \qquad (1)
\]

where R_θ denotes the expected objective¹ when evaluating the parameter vector θ. The term H(π) = −∫ π(θ) log π(θ) dθ denotes the entropy of the distribution π, and q(θ) is the old distribution. The parameters ϵ and β are user-specified parameters that control the exploration-exploitation trade-off of the algorithm. We can obtain a closed-form solution for π(θ) by optimizing the Lagrangian of the optimization problem given above. This solution is given as

\[
\pi(\theta) \propto q(\theta)^{\eta/(\eta+\omega)} \exp\!\Big( \frac{\mathcal{R}_\theta}{\eta+\omega} \Big), \qquad (2)
\]

where η and ω are the Lagrangian multipliers. As we can see, the new distribution is a geometric average between the old sampling distribution q(θ) and the exponential transformation of the objective function. Note that, by setting ω = 0, we obtain the standard episodic REPS formulation [9]. The optimal values of η and ω can be obtained by minimizing the dual function g(η, ω) subject to η > 0 and ω > 0, see [16]. The dual function g(η, ω) is given by

\[
g(\eta, \omega) = \eta\epsilon - \omega\beta + (\eta+\omega) \log\!\bigg( \int q(\theta)^{\eta/(\eta+\omega)} \exp\!\Big( \frac{\mathcal{R}_\theta}{\eta+\omega} \Big)\, d\theta \bigg). \qquad (3)
\]

¹ Note that we are typically not able to obtain the expected reward but only a noisy estimate of the underlying reward distribution.

As we are dealing with continuous distributions, the entropy can also be negative.
We specify β such that the difference between H(π) and the entropy H(π₀) of a minimum-exploration policy π₀ is decreased by a certain percentage, i.e., we change the entropy constraint to

\[
H(\pi) - H(\pi_0) \ge \gamma\big(H(q) - H(\pi_0)\big) \;\Rightarrow\; \beta = \gamma\big(H(q) - H(\pi_0)\big) + H(\pi_0).
\]

Throughout all our experiments, we use the same value γ = 0.99, and we set the minimum entropy H(π₀) of the search distribution to a sufficiently small value such as −75. We will show that using the additional entropy bound considerably alleviates the premature convergence problem.

2.2 Analytic Solution of the Dual Function and the Policy

Using a quadratic surrogate model of the objective function, we can compute the integrals in the dual function analytically, and, hence, we can satisfy the introduced bounds exactly in the MORE framework. At the same time, we retain the advantages of surrogate models, such as a smoothed estimate in the case of noisy objective functions and a decrease in the sample complexity.² We will for now assume that we are given a quadratic surrogate model R_θ ≈ θᵀRθ + θᵀr + r₀ of the objective function, which we will learn from data in Section 3. Moreover, the search distribution is Gaussian, i.e., q(θ) = N(θ | b, Q). In this case, the integrals in the dual function given in Equation (3) can be solved in closed form. The integral inside the log term in Equation (3) now represents an integral over an unnormalized Gaussian distribution; hence, it evaluates to the inverse of the normalization factor of the corresponding Gaussian. After rearranging terms, the dual can be written as

\[
g(\eta, \omega) = \eta\epsilon - \omega\beta + \frac{1}{2}\Big( f^\top F f - \eta\, b^\top Q^{-1} b - \eta \log|2\pi Q| + (\eta+\omega) \log|2\pi(\eta+\omega) F| \Big) \qquad (4)
\]

with F = (ηQ⁻¹ − 2R)⁻¹ and f = ηQ⁻¹b + r. Hence, the dual function g(η, ω) can be efficiently evaluated using matrix inversions and matrix products. Note that, for a large enough value of η, the matrix F will be positive definite and hence invertible even if R is not. In our optimization, we always restrict the η values such that F stays positive definite.³
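For fixed Lagrangian multipliers η and ω, the quantities F and f and the resulting Gaussian update can be written out directly from Equations (4) and (5). The sketch below uses an illustrative diagonal surrogate and omits the dual optimization over η and ω.

```python
import numpy as np

def more_update(b, Q, R, r, eta, omega):
    """New search distribution N(F f, F (eta + omega)) for given multipliers.
    F = (eta Q^-1 - 2R)^-1 and f = eta Q^-1 b + r, as in the dual (Eq. 4)."""
    Qinv = np.linalg.inv(Q)
    F = np.linalg.inv(eta * Qinv - 2.0 * R)   # must be positive definite
    f = eta * Qinv @ b + r
    return F @ f, (eta + omega) * F           # mean and covariance

d = 4
b, Q = np.zeros(d), np.eye(d)                 # old distribution q = N(b, Q)
A = np.diag(np.arange(1.0, d + 1))
Rq, r = -A, 2.0 * A @ np.ones(d)              # surrogate th^T Rq th + th^T r, optimum at 1
# Small eta: the update moves to the surrogate optimum (-2 Rq)^(-1) r
m_greedy, _ = more_update(b, Q, Rq, r, eta=1e-6, omega=1.0)
# Large eta: the KL constraint keeps the new distribution close to q
m_safe, C_safe = more_update(b, Q, Rq, r, eta=1e6, omega=0.0)
```

The two limiting cases make the role of η visible: it interpolates between greedily jumping to the surrogate optimum and staying at the old distribution.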
Nevertheless, we can always find the η value that yields the desired KL divergence. In contrast to MORE, episodic REPS relies on a sample-based approximation of the integrals in the dual function in Equation (3): it uses the sampled rewards R_θ of the parameters θ to approximate this integral. We can also obtain the update rule for the new policy π(θ). From Equation (2), we know that the new policy is the geometric average of the Gaussian sampling distribution q(θ) and a squared exponential given by the exponentially transformed surrogate. After rearranging terms and completing the square, the new policy can be written as

\[
\pi(\theta) = N\big(\theta \,\big|\, F f,\; F(\eta+\omega)\big), \qquad (5)
\]

where F and f are given in the previous section.

3 Learning Approximate Quadratic Models

In this section, we show how to learn a quadratic surrogate. Note that we use the quadratic surrogate in each iteration to approximate the objective function locally, not globally. As the search distribution shrinks in each iteration, the model error also vanishes asymptotically. A quadratic surrogate is also a natural choice if a Gaussian distribution is used, because the exponent of the Gaussian is itself quadratic in the parameters; hence, even a more complex surrogate could not be fully exploited by a Gaussian distribution. A local quadratic surrogate model provides similar second-order information as the Hessian in standard gradient updates. However, a quadratic surrogate model also has quadratically many parameters, which we have to estimate from an (ideally) very small data set. Therefore, already learning a simple local quadratic surrogate is a challenging task.

In order to learn the local quadratic surrogate, we can use linear regression to fit a function of the form f(θ) = φ(θ)β, where φ(θ) is a feature function that returns a bias term, all linear and all quadratic terms of θ. Hence, the dimensionality of φ(θ) is D = 1 + d + d(d + 1)/2, where d is the dimensionality of the parameter space. To reduce the dimensionality of the regression problem, we project θ into a lower-dimensional space, l = Wθ, and solve the linear regression problem in this reduced space.⁴ The quadratic form of the objective function can then be recovered from β and W. Still, the question remains how to choose the projection matrix W. We did not achieve good performance with standard PCA [17], as PCA is unsupervised. Yet, the W matrix is typically quite high-dimensional, such that it is hard to obtain the matrix by supervised learning while simultaneously avoiding over-fitting. Inspired by [18], where supervised Bayesian dimensionality reduction is used for classification, we also use a supervised Bayesian approach in which we integrate out the projection matrix W.

² The regression performed for learning the quadratic surrogate model estimates the expectation of the objective function from the observed samples.
³ To optimize g, any constrained nonlinear optimization method can be used [13].

Figure 1: Comparison of stochastic search methods for optimizing the uni-modal Rosenbrock (a) and the multi-modal Rastrigin (b) functions. (c) Comparison for a noisy objective function. All results show that MORE clearly outperforms the other methods.
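The feature function φ(θ) and its dimensionality D = 1 + d + d(d + 1)/2 can be made concrete. With enough samples, plain least squares in this feature space recovers a quadratic exactly; this is a sketch of the full-dimensional regression only, not of the projected Bayesian model discussed above.

```python
import numpy as np

def phi(theta):
    """Bias, linear, and quadratic terms of theta: dimensionality 1 + d + d(d+1)/2."""
    d = len(theta)
    quad = [theta[i] * theta[j] for i in range(d) for j in range(i, d)]
    return np.concatenate(([1.0], theta, quad))

rng = np.random.default_rng(0)
d = 5
D = 1 + d + d * (d + 1) // 2
M = rng.normal(size=(d, d)); M = 0.5 * (M + M.T)   # symmetric quadratic term
r, r0 = rng.normal(size=d), 0.7
f = lambda th: th @ M @ th + th @ r + r0           # true quadratic objective

thetas = rng.normal(size=(3 * D, d))               # more samples than features
y = np.array([f(th) for th in thetas])
Phi = np.array([phi(th) for th in thetas])
beta, *_ = np.linalg.lstsq(Phi, y, rcond=None)     # exact fit: f is in the feature span
```

Since any quadratic is exactly representable in this feature basis, the fit is exact up to numerical precision, which is why the paper's difficulty lies not in the basis but in the O(d²) parameter count.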
3.1 Bayesian Dimensionality Reduction for Quadratic Functions

In order to integrate out the parameters W, we use the following probabilistic dimensionality reduction model:

\[
p(r^* \mid \theta^*, \mathcal{D}) = \int p(r^* \mid \theta^*, W)\, p(W \mid \mathcal{D})\, dW, \qquad (6)
\]

where r* is the prediction of the objective at the query point θ*, and D is the training data set consisting of parameters θ^[k] and their objective evaluations R^[k]. The posterior over W is given by Bayes’ rule, i.e., p(W | D) = p(D | W) p(W) / p(D). The likelihood function p(D | W) is given by

\[
p(\mathcal{D} \mid W) = \int p(\mathcal{D} \mid W, \beta)\, p(\beta)\, d\beta, \qquad (7)
\]

where p(D | W, β) is the likelihood of the linear model β and p(β) its prior. For the likelihood of the linear model we use a multiplicative noise model, i.e., the higher the absolute value of the objective, the higher the variance. The intuition behind this choice is that we are mainly interested in minimizing the relative error instead of the absolute error.⁵ Our likelihood and prior are therefore given by

\[
p(\mathcal{D} \mid W, \beta) = \prod_{k=1}^{N} N\big(R^{[k]} \,\big|\, \phi(W\theta^{[k]})\beta,\; \sigma^2 |R^{[k]}|\big), \qquad p(\beta) = N(\beta \mid 0, \tau^2 I). \qquad (8)
\]

Equation (7) is a weighted Bayesian linear regression model in β, where the weight of each sample is given by |R^[k]|⁻¹. Therefore, p(D | W) can be obtained efficiently in closed form. However, due to the feature transformation, the output R^[k] depends non-linearly on the projection W. Therefore, the posterior p(W | D) cannot be obtained in closed form anymore. We use a simple sample-based approach in order to approximate the posterior p(W | D): we use K samples from the prior p(W) to approximate the integrals in Equation (6) and in p(D).

⁴ W ∈ R^{p×d} is a projection matrix that projects a vector from a d-dimensional manifold to a p-dimensional manifold.
⁵ We observed empirically that such a relative error performs better if we have non-smooth objective functions with a large difference in the objective values. For example, an error of 10 has a huge influence for an objective value of −1, while for a value of −10000, such an error is negligible.
In this case, the predictive model is given by

p(r* | θ*, D) ≈ (1/K) Σ_i p(r* | θ*, W_i) p(D | W_i) / p(D),    (9)

where p(D) ≈ (1/K) Σ_i p(D | W_i). The prediction for a single W_i can again be obtained by a standard Bayesian linear regression. Our algorithm is only interested in the expectation R_θ = E[r | θ] in the form of a quadratic model. Given a certain W_i, we can obtain a single quadratic model from φ(W_i θ)μ_β, where μ_β is the mean of the posterior distribution p(β | W, D) obtained by Bayesian linear regression. The expected quadratic model is then obtained by a weighted average over all K quadratic models with weights p(D | W_i)/p(D). Note that the posterior can be approximated better with a higher number of projection matrix samples K. Generating these samples is typically inexpensive, as it only requires computation time but no evaluations of the objective function. We also investigated more sophisticated sampling techniques such as elliptical slice sampling [19], which achieved similar performance but considerably increased computation time. Further optimization of the sampling technique is part of future work.

4 Experiments

We compare MORE with state-of-the-art methods in stochastic search and policy search such as CMA-ES [1], NES [2], PoWER [20] and episodic REPS [9]. In our first experiments, we use standard optimization test functions [21], such as the Rosenbrock (uni-modal) and the Rastrigin (multi-modal) functions. We use a 15-dimensional version of these functions. Furthermore, we use a 5-link planar robot that has to reach a given point in task space as a toy task for the comparisons. The resulting policy has 25 parameters, but we also test the algorithms in high-dimensional parameter spaces by scaling the robot up to 30 links (150 parameters). We subsequently made the task more difficult by introducing hard obstacles, which results in a discontinuous objective function. We denote this task the hole-reaching task.
Finally, we evaluate our algorithm on a physical simulation of a robot playing beer pong. The used parameters of the algorithms and a detailed evaluation of the parameters of MORE can be found in the supplement.

4.1 Standard Optimization Test Functions

We chose one uni-modal function, the Rosenbrock function f(x) = Σ_{i=1}^{n−1} [100(x_{i+1} − x_i²)² + (1 − x_i)²], and one multi-modal function, the Rastrigin function f(x) = 10n + Σ_{i=1}^{n} [x_i² − 10 cos(2πx_i)]. Both functions have a global minimum of f(x) = 0. In our experiments, the mean of the initial distributions has been chosen randomly.

Algorithmic Comparison. We compared our algorithm against CMA-ES, NES, PoWER and REPS. In each iteration, we generated 15 new samples6. For MORE, REPS and PoWER, we always keep the last L = 150 samples, while for NES and CMA-ES only the 15 current samples are kept7. As we can see in Figure 1, MORE outperforms all the other methods in terms of learning speed and final performance on all test functions. However, in terms of computation time, MORE was 5 times slower than the other algorithms. Yet, MORE was sufficiently fast, as one policy update took less than 1s.

Performance on a Noisy Function. We also conducted an experiment on optimizing the Sphere function where we add multiplicative noise to the reward samples, i.e., y = f(x) + ε|f(x)|, where ε ∼ N(0, 1.0) and f(x) = xᵀMx with a randomly chosen M matrix.

6We use the heuristics introduced in [1, 2] for CMA-ES and NES.
7NES and CMA-ES typically only use the new samples and discard the old samples. We also tried keeping old samples or generating more new samples, which decreased the performance considerably.
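The two test functions above can be implemented directly from their definitions (a minimal numpy sketch; function names are ours):

```python
import numpy as np

def rosenbrock(x):
    """Uni-modal Rosenbrock function; global minimum f(1, ..., 1) = 0."""
    return np.sum(100.0 * (x[1:] - x[:-1]**2)**2 + (1.0 - x[:-1])**2)

def rastrigin(x):
    """Multi-modal Rastrigin function; global minimum f(0, ..., 0) = 0."""
    return 10.0 * len(x) + np.sum(x**2 - 10.0 * np.cos(2.0 * np.pi * x))
```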
[Figure 2: (a) Algorithmic comparison for a planar reaching task (5 joints, 25 parameters). MORE outperforms all the other methods considerably. (b) Algorithmic comparison for a high-dimensional reaching task (30 joints, 150 parameters). The performance of NES degraded while MORE could still outperform CMA-ES. (c) Evaluation of the entropy bound γ. For a low γ, the entropy bound is not active and the algorithm converges prematurely. If γ is close to one, the entropy is reduced too slowly and convergence takes long. Legend: MORE, REPS, xNES, CMA-ES; axes: episodes vs. average return.]

Figure 1(c) shows that MORE successfully smooths out the noise and converges, while the other methods diverge. The result shows that MORE can learn highly noisy reward functions.

4.2 Planar Reaching and Hole Reaching

We used a 5-link planar robot with DMPs [22] as the underlying control policy. Each link had a length of 1 m. The robot is modeled as a decoupled linear dynamical system. The end-effector of the robot has to reach a via-point v50 = [1, 1] at time step 50 and, at the final time step T = 100, the point v100 = [5, 0]. The reward was given by a quadratic cost term for the two via-points as well as quadratic costs for high accelerations. Note that this objective function is highly non-quadratic in the parameters, as the via-points are defined in end-effector space. We used 5 basis functions per degree of freedom for the DMPs, while the goal attractor for reaching the final state was assumed to be known. Hence, our parameter vector had 25 dimensions.
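The via-point reward described above can be sketched as follows (a hypothetical illustration only: the weights w_via and w_acc are our assumptions, as the paper does not state the cost coefficients):

```python
import numpy as np

def viapoint_reward(ee_traj, acc, w_via=1e4, w_acc=1e-2):
    """Sketch of the planar-reaching reward: quadratic costs for missing the
    via-points at t = 50 and t = 100 plus quadratic acceleration costs.
    ee_traj is a (T+1, 2) array of end-effector positions; the weights are
    illustrative assumptions, not values from the paper."""
    v50, v100 = np.array([1.0, 1.0]), np.array([5.0, 0.0])
    cost = w_via * np.sum((ee_traj[50] - v50)**2)
    cost += w_via * np.sum((ee_traj[100] - v100)**2)
    cost += w_acc * np.sum(acc**2)
    return -cost
```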
The setup, including the learned policy, is shown in the supplement.

Algorithmic Comparison. We generated 40 new samples per iteration. For MORE and REPS, we always keep the last L = 200 samples, while for NES and CMA-ES only the 40 current samples are kept. We empirically optimized the open parameters of the algorithms by manually testing 50 parameter sets for each algorithm. The results shown in Figure 2(a) clearly show that MORE outperforms all other methods in terms of learning speed and final performance.

Entropy Bound. We also evaluated the entropy bound in Figure 2(c). We can see that the entropy constraint is a crucial component of the algorithm for avoiding premature convergence.

High-Dimensional Parameter Spaces. We also evaluated the same task with a 30-link planar robot, resulting in a 150-dimensional parameter space. We compared MORE, CMA-ES, REPS and NES. While NES degraded considerably in performance, CMA-ES and MORE performed well, with MORE finding considerably better policies (average reward of −6571 versus −15460 for CMA-ES), see Figure 2(b). The setup with the learned policy from MORE is depicted in the supplement.

We use the same robot setup as in the planar reaching task for the hole-reaching task. To complete the hole-reaching task, the robot's end-effector has to reach the bottom of a hole (35 cm wide and 1 m deep) centered at [2, 0] without any collision with the ground or the walls, see Figure 3(c). The reward was given by a quadratic cost term for the desired final point, quadratic costs for high accelerations and an additional punishment for collisions with the walls. Note that this objective function is discontinuous due to the costs for collisions. The goal attractor of the DMP for reaching the final state in this task is unknown and is also learned. Hence, our parameter vector had 30 dimensions.

Algorithmic Comparison. We used the same learning parameters as for the planar reaching task.
The results in Figure 3(a) show that MORE clearly outperforms all other methods. In this task, NES could not find any reasonable solution, while PoWER, REPS and CMA-ES could only learn sub-optimal solutions. MORE could also achieve the same learning speed as REPS and CMA-ES, but would then also converge to a sub-optimal solution.

[Figure 3: (a) Algorithmic comparison for the hole-reaching task. MORE could find policies of much higher quality. (b) Algorithmic comparison for the beer pong task. Only MORE could reliably learn high-quality policies; for the other methods, even if some trials found good solutions, other trials got stuck prematurely. (c) Hole-reaching task posture.]

4.3 Beer Pong

[Figure 4: The beer pong task. The robot has to throw a ball such that it bounces off the table and ends up in the cup.]

In this task, a seven-DoF simulated Barrett WAM robot arm had to play beer pong, i.e., it had to throw a ball such that it bounces once on the table and falls into a cup. The ball was placed in a container mounted on the end-effector. The ball could leave the container by a strong deceleration of the robot's end-effector. We again used a DMP as the underlying policy representation, where we used the shape parameters (five per DoF) and the goal attractor (one per DoF) as parameters. The mean of our search distribution was initialized with imitation learning. The cup was placed at a distance of 2.2 m from the robot and had a height of 7 cm. As the reward function, we computed the point of the ball trajectory after the bounce on the table where the ball passes the plane of the entry of the cup.
The reward was set to be 20 times the negative squared distance of that point to the center of the cup, while punishing the acceleration of the joints. We evaluated MORE, CMA-ES, PoWER and REPS on this task. The setup is shown in Figure 4 and the learning curve in Figure 3(b). MORE was able to accurately hit the ball into the cup, while the other algorithms could not find a robust policy.

5 Conclusion

Using KL-bounds to limit the update of the search distribution is a widespread idea in the stochastic search community but typically requires approximations. In this paper, we presented a new model-based stochastic search algorithm that computes the KL-bound analytically. By relying on a Gaussian search distribution and on locally learned quadratic models of the objective function, we can obtain a closed form of the information-theoretic policy update. We also introduced an additional entropy term in the formulation that is needed to avoid premature shrinkage of the variance of the search distribution. Our algorithm considerably outperforms competing methods in all the considered scenarios. The main disadvantage of MORE is the number of parameters. However, based on our experiments, these parameters are not problem specific.

Acknowledgment

This project has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No #645582 (RoMaNS), and the first author is supported by FCT under grant SFRH/BD/81155/2011.

References

[1] N. Hansen, S.D. Muller, and P. Koumoutsakos. Reducing the Time Complexity of the Derandomized Evolution Strategy with Covariance Matrix Adaptation (CMA-ES). Evolutionary Computation, 2003.
[2] Y. Sun, D. Wierstra, T. Schaul, and J. Schmidhuber. Efficient Natural Evolution Strategies. In Proceedings of the 11th Annual Conference on Genetic and Evolutionary Computation (GECCO), 2009.
[3] F. Stulp and O. Sigaud. Path Integral Policy Improvement with Covariance Matrix Adaptation.
In International Conference on Machine Learning (ICML), 2012.
[4] T. Rückstieß, M. Felder, and J. Schmidhuber. State-dependent Exploration for Policy Gradient Methods. In Proceedings of the European Conference on Machine Learning (ECML), 2008.
[5] T. Furmston and D. Barber. A Unifying Perspective of Parametric Policy Search Methods for Markov Decision Processes. In Neural Information Processing Systems (NIPS), 2012.
[6] I. Loshchilov, M. Schoenauer, and M. Sebag. Intensive Surrogate Model Exploitation in Self-Adaptive Surrogate-Assisted CMA-ES (SAACM-ES). In GECCO, 2013.
[7] S. Mannor, R. Rubinstein, and Y. Gat. The Cross Entropy Method for Fast Policy Search. In Proceedings of the 20th International Conference on Machine Learning (ICML), 2003.
[8] E. Theodorou, J. Buchli, and S. Schaal. A Generalized Path Integral Control Approach to Reinforcement Learning. The Journal of Machine Learning Research, 2010.
[9] A. Kupcsik, M. P. Deisenroth, J. Peters, and G. Neumann. Data-Efficient Contextual Policy Search for Robot Movement Skills. In Proceedings of the National Conference on Artificial Intelligence (AAAI), 2013.
[10] J. Peters, K. Mülling, and Y. Altun. Relative Entropy Policy Search. In Proceedings of the 24th National Conference on Artificial Intelligence (AAAI). AAAI Press, 2010.
[11] D. Wierstra, T. Schaul, T. Glasmachers, Y. Sun, J. Peters, and J. Schmidhuber. Natural Evolution Strategies. Journal of Machine Learning Research, 2014.
[12] S. Amari. Natural Gradient Works Efficiently in Learning. Neural Computation, 1998.
[13] I. Loshchilov, M. Schoenauer, and M. Sebag. KL-based Control of the Learning Schedule for Surrogate Black-box Optimization. CoRR, 2013.
[14] M.J.D. Powell. The NEWUOA Software for Unconstrained Optimization without Derivatives. Report DAMTP 2004/NA05, University of Cambridge, 2004.
[15] M.J.D. Powell. The BOBYQA Algorithm for Bound Constrained Optimization Without Derivatives. Report DAMTP 2009/NA06, University of Cambridge, 2009.
[16] S. Boyd and L. Vandenberghe. Convex Optimization. Cambridge University Press, 2004.
[17] I.T. Jolliffe. Principal Component Analysis. Springer Verlag, 1986.
[18] G. Mehmet. Bayesian Supervised Dimensionality Reduction. IEEE T. Cybernetics, 2013.
[19] I. Murray, R.P. Adams, and D.J.C. MacKay. Elliptical Slice Sampling. JMLR: W&CP, 9, 2010.
[20] J. Kober and J. Peters. Policy Search for Motor Primitives in Robotics. Machine Learning, pages 1–33, 2010.
[21] M. Molga and C. Smutnicki. Test Functions for Optimization Needs. http://www.zsd.ict.pwr.wroc.pl/files/docs/functions.pdf, 2005.
[22] A. Ijspeert and S. Schaal. Learning Attractor Landscapes for Learning Motor Primitives. In Advances in Neural Information Processing Systems 15 (NIPS), 2003.

| 2015 | 95 |
5,989 | Semi-Supervised Learning with Ladder Networks Antti Rasmus and Harri Valpola The Curious AI Company, Finland Mikko Honkala Nokia Labs, Finland Mathias Berglund and Tapani Raiko Aalto University, Finland & The Curious AI Company, Finland Abstract We combine supervised learning with unsupervised learning in deep neural networks. The proposed model is trained to simultaneously minimize the sum of supervised and unsupervised cost functions by backpropagation, avoiding the need for layer-wise pre-training. Our work builds on top of the Ladder network proposed by Valpola [1], which we extend by combining the model with supervision. We show that the resulting model reaches state-of-the-art performance in semi-supervised MNIST and CIFAR-10 classification, in addition to permutation-invariant MNIST classification with all labels. 1 Introduction In this paper, we introduce an unsupervised learning method that fits well with supervised learning. Combining an auxiliary task to help train a neural network was proposed by Suddarth and Kergosien [2]. There are multiple choices for the unsupervised task, for example reconstruction of the inputs at every level of the model [e.g., 3] or classification of each input sample into its own class [4]. Although some methods have been able to simultaneously apply both supervised and unsupervised learning [3, 5], often these unsupervised auxiliary tasks are only applied as pre-training, followed by normal supervised learning [e.g., 6]. In complex tasks there is often much more structure in the inputs than can be represented, and unsupervised learning cannot, by definition, know what will be useful for the task at hand. Consider, for instance, the autoencoder approach applied to natural images: an auxiliary decoder network tries to reconstruct the original input from the internal representation.
The autoencoder will try to preserve all the details needed for reconstructing the image at pixel level, even though classification is typically invariant to all kinds of transformations which do not preserve pixel values. Our approach follows Valpola [1], who proposed a Ladder network where the auxiliary task is to denoise representations at every level of the model. The model structure is an autoencoder with skip connections from the encoder to the decoder, and the learning task is similar to that in denoising autoencoders but applied at every layer, not just the inputs. The skip connections relieve the pressure to represent details at the higher layers of the model because, through the skip connections, the decoder can recover any details discarded by the encoder. Previously the Ladder network has only been demonstrated in unsupervised learning [1, 7], but we now combine it with supervised learning. The key aspects of the approach are as follows:

Compatibility with supervised methods. The unsupervised part focuses on relevant details found by supervised learning. Furthermore, it can be added to existing feedforward neural networks, for example multi-layer perceptrons (MLPs) or convolutional neural networks (CNNs).

Scalability due to local learning. In addition to the supervised learning target at the top layer, the model has local unsupervised learning targets on every layer, making it suitable for very deep neural networks. We demonstrate this with two deep supervised network architectures.

Computational efficiency. The encoder part of the model corresponds to normal supervised learning. Adding a decoder, as proposed in this paper, approximately triples the computation during training but not necessarily the training time, since the same result can be achieved faster through better utilization of available information. Overall, computation per update scales similarly to whichever supervised learning approach is used, with a small multiplicative factor.
As explained in Section 2, the skip connections and layer-wise unsupervised targets effectively turn autoencoders into hierarchical latent variable models which are known to be well suited for semi-supervised learning. Indeed, we obtain state-of-the-art results in semi-supervised learning in the MNIST, permutation invariant MNIST and CIFAR-10 classification tasks (Section 4). However, the improvements are not limited to semi-supervised settings: for the permutation invariant MNIST task, we also achieve a new record with the normal full-labeled setting. For a longer version of this paper with more complete descriptions, please see [8].

2 Derivation and justification

Latent variable models are an attractive approach to semi-supervised learning because they can combine supervised and unsupervised learning in a principled way. The only difference is whether the class labels are observed or not. This approach was taken, for instance, by Goodfellow et al. [5] with their multi-prediction deep Boltzmann machine. A particularly attractive property of hierarchical latent variable models is that they can, in general, leave the details for the lower levels to represent, allowing higher levels to focus on more invariant, abstract features that turn out to be relevant for the task at hand. The training process of latent variable models can typically be split into inference and learning, that is, finding the posterior probability of the unobserved latent variables and then updating the underlying probability model to better fit the observations. For instance, in the expectation-maximization (EM) algorithm, the E-step corresponds to finding the expectation of the latent variables over the posterior distribution assuming the model fixed, and the M-step then maximizes the underlying probability model assuming the expectation fixed. The main problem with latent variable models is how to make inference and learning efficient. Suppose there are layers l of latent variables z(l).
Latent variable models often represent the probability distribution of all the variables explicitly as a product of terms, such as p(z(l) | z(l+1)) in directed graphical models. The inference process and model updates are then derived from Bayes' rule, typically as some kind of approximation. Often the inference is iterative, as it is generally impossible to solve the resulting equations in closed form as a function of the observed variables. There is a close connection between denoising and probabilistic modeling. On the one hand, given a probabilistic model, you can compute the optimal denoising. Say you want to reconstruct a latent z using a prior p(z) and an observation z̃ = z + noise. We first compute the posterior distribution p(z | z̃), and use its center of gravity as the reconstruction ẑ. One can show that this minimizes the expected denoising cost (ẑ − z)². On the other hand, given a denoising function, one can draw samples from the corresponding distribution by creating a Markov chain that alternates between corruption and denoising [9]. Valpola [1] proposed the Ladder network where the inference process itself can be learned by using the principle of denoising, which has been used in supervised learning [10], denoising autoencoders (dAE) [11] and denoising source separation (DSS) [12] for complementary tasks. In a dAE, an autoencoder is trained to reconstruct the original observation x from a corrupted version x̃. Learning is based simply on minimizing the norm of the difference of the original x and its reconstruction x̂ from the corrupted x̃, that is, the cost is ‖x̂ − x‖². While dAEs are normally only trained to denoise the observations, the DSS framework is based on the idea of using denoising functions ẑ = g(z) of latent variables z to train a mapping z = f(x) which models the likelihood of the latent variables as a function of the observations.
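The posterior-mean view of denoising has a simple closed form in the Gaussian case (a minimal sketch with an assumed Gaussian prior; the bimodal prior of Figure 1 would instead yield a nonlinear denoiser):

```python
import numpy as np

def optimal_denoiser_gaussian(z_tilde, prior_var, noise_var):
    """Posterior mean E[z | z_tilde] for z ~ N(0, prior_var) and
    z_tilde = z + N(0, noise_var): linear shrinkage toward the prior mean,
    which minimizes the expected denoising cost (z_hat - z)^2."""
    return prior_var / (prior_var + noise_var) * z_tilde
```

With no noise (noise_var → 0) the denoiser approaches the identity function, matching the discussion of Figure 1.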
The cost function is identical to that used in a dAE except that latent variables z replace the observations x, that is, the cost is ‖ẑ − z‖².

[Figure 1: Left: A depiction of an optimal denoising function for a bimodal distribution. The input for the function is the corrupted value (x axis) and the target is the clean value (y axis). The denoising function moves values towards higher probabilities as shown by the green arrows. Right: A conceptual illustration of the Ladder network when L = 2. The feedforward path (x → z(1) → z(2) → y) shares the mappings f(l) with the corrupted feedforward path, or encoder (x → z̃(1) → z̃(2) → ỹ). The decoder (z̃(l) → ẑ(l) → x̂) consists of denoising functions g(l) and has cost functions C_d^(l) on each layer trying to minimize the difference between ẑ(l) and z(l). The output ỹ of the encoder can also be trained to match available labels t(n).]

The only thing to keep in mind is that z needs to be normalized somehow, as otherwise the model has a trivial solution at z = ẑ = constant. In a dAE, this cannot happen as the model cannot change the input x. Figure 1 (left) depicts the optimal denoising function ẑ = g(z̃) for a one-dimensional bimodal distribution which could be the distribution of a latent variable inside a larger model. The shape of the denoising function depends on the distribution of z and the properties of the corruption noise. With no noise at all, the optimal denoising function would be the identity function. In general, the denoising function pushes the values towards higher probabilities, as shown by the green arrows. Figure 1 (right) shows the structure of the Ladder network.
Every layer contributes to the cost function a term C_d^(l) = ‖z(l) − ẑ(l)‖², which trains the layers above (both encoder and decoder) to learn the denoising function ẑ(l) = g(l)(z̃(l), ẑ(l+1)) which maps the corrupted z̃(l) onto the denoised estimate ẑ(l). As the estimate ẑ(l) incorporates all prior knowledge about z, the same cost function term also trains the encoder layers below to find cleaner features which better match the prior expectation. Since the cost function needs both the clean z(l) and corrupted z̃(l), during training the encoder is run twice: a clean pass for z(l) and a corrupted pass for z̃(l). Another feature which differentiates the Ladder network from regular dAEs is that each layer has a skip connection between the encoder and decoder. This feature mimics the inference structure of latent variable models and makes it possible for the higher levels of the network to leave some of the details for lower levels to represent. Rasmus et al. [7] showed that such skip connections allow dAEs to focus on abstract invariant features on the higher levels, making the Ladder network a good fit with supervised learning that can select which information is relevant for the task at hand. One way to picture the Ladder network is to consider it as a collection of nested denoising autoencoders which share parts of the denoising machinery with each other. From the viewpoint of the autoencoder at layer l, the representations on the higher layers can be treated as hidden neurons. In other words, there is no particular reason why ẑ(l+i) produced by the decoder should resemble the corresponding representations z(l+i) produced by the encoder. It is only the cost function C_d^(l+i) that ties these together and forces the inference to proceed in reverse order in the decoder. This sharing helps a deep denoising autoencoder to learn the denoising process as it splits the task into meaningful sub-tasks of denoising intermediate representations.
Algorithm 1: Calculation of the output y and cost function C of the Ladder network
Require: x(n)
# Corrupted encoder and classifier
h̃(0) ← z̃(0) ← x(n) + noise
for l = 1 to L do
    z̃(l) ← batchnorm(W(l) h̃(l−1)) + noise
    h̃(l) ← activation(γ(l) ⊙ (z̃(l) + β(l)))
end for
P(ỹ | x) ← h̃(L)
# Clean encoder (for denoising targets)
h(0) ← z(0) ← x(n)
for l = 1 to L do
    z_pre(l) ← W(l) h(l−1)
    µ(l) ← batchmean(z_pre(l))
    σ(l) ← batchstd(z_pre(l))
    z(l) ← batchnorm(z_pre(l))
    h(l) ← activation(γ(l) ⊙ (z(l) + β(l)))
end for
# Final classification: P(y | x) ← h(L)
# Decoder and denoising
for l = L to 0 do
    if l = L then
        u(L) ← batchnorm(h̃(L))
    else
        u(l) ← batchnorm(V(l+1) ẑ(l+1))
    end if
    ∀i: ẑ(l)_i ← g(z̃(l)_i, u(l)_i)
    ∀i: ẑ(l)_{i,BN} ← (ẑ(l)_i − µ(l)_i) / σ(l)_i
end for
# Cost function C for training:
C ← 0
if t(n) then
    C ← −log P(ỹ = t(n) | x(n))
end if
C ← C + Σ_{l=0}^{L} λ_l ‖z(l) − ẑ(l)_BN‖²

3 Implementation of the Model

We implement the Ladder network for fully connected MLP networks and for convolutional networks. We used standard rectifier networks with batch normalization applied to each preactivation. The feedforward pass of the full Ladder network is listed in Algorithm 1. In the decoder, we parametrize the denoising function such that it supports denoising of conditionally independent Gaussian latent variables, conditioned on the activations ẑ(l+1) of the layer above. The denoising function g is therefore coupled into components

ẑ(l)_i = g_i(z̃(l)_i, u(l)_i) = (z̃(l)_i − µ_i(u(l)_i)) υ_i(u(l)_i) + µ_i(u(l)_i)

where u(l)_i propagates information from ẑ(l+1) by u(l) = batchnorm(V(l+1) ẑ(l+1)). The functions µ_i(u(l)_i) and υ_i(u(l)_i) are modeled as expressive nonlinearities:

µ_i(u(l)_i) = a(l)_{1,i} sigmoid(a(l)_{2,i} u(l)_i + a(l)_{3,i}) + a(l)_{4,i} u(l)_i + a(l)_{5,i},

with the form of the nonlinearity similar for υ_i(u(l)_i). The decoder thus has 10 unit-wise parameters a, compared to the two parameters (γ and β [13]) in the encoder.
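The per-unit denoising function can be sketched directly from the equations above (a minimal numpy sketch; the split of the 10 parameters into two groups of five, the first for µ and the second for υ, is our assumed ordering):

```python
import numpy as np

def g_denoise(z_tilde, u, a):
    """Ladder per-unit denoising: z_hat = (z_tilde - mu(u)) * v(u) + mu(u),
    with mu and v the sigmoid-modulated nonlinearities of the paper.
    `a` holds 10 unit-wise parameters; a[0:5] parametrize mu, a[5:10]
    parametrize v (this ordering is an assumption for illustration)."""
    sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))
    mu = a[0] * sigmoid(a[1] * u + a[2]) + a[3] * u + a[4]
    v = a[5] * sigmoid(a[6] * u + a[7]) + a[8] * u + a[9]
    return (z_tilde - mu) * v + mu
```

For example, with parameters giving µ(u) = 0 and υ(u) = 1, the denoiser reduces to the identity on z̃, matching the no-noise case discussed in Section 2.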
It is worth noting that a simple special case of the decoder is a model where λ_l = 0 when l < L. This corresponds to a denoising cost only on the top layer and means that most of the decoder can be omitted. This model, which we call the Γ-model due to the shape of the graph, is useful as it can easily be plugged into any feedforward network without a decoder implementation. Further implementation details of the model can be found in the supplementary material or Ref. [8].

4 Experiments

We ran experiments both with the MNIST and CIFAR-10 datasets, where we attached the decoder both to fully-connected MLP networks and to convolutional neural networks. We also compared the performance of the simpler Γ-model (Sec. 3) to the full Ladder network. With convolutional networks, our focus was exclusively on semi-supervised learning. We make claims neither about the optimality nor the statistical significance of the supervised baseline results. We used the Adam optimization algorithm [14]. The initial learning rate was 0.002 and it was decreased linearly to zero during a final annealing phase. The minibatch size was 100. The source code for all the experiments is available at https://github.com/arasmus/ladder.

Table 1: A collection of previously reported MNIST test errors in the permutation invariant setting followed by the results with the Ladder network. * = SVM. Standard deviation in parentheses.

Test error % with # of used labels            100              1000             All
Semi-sup. Embedding [15]                      16.86            5.73             1.5
Transductive SVM [from 15]                    16.81            5.38             1.40*
MTC [16]                                      12.03            3.64             0.81
Pseudo-label [17]                             10.49            3.46             –
AtlasRBF [18]                                 8.10 (± 0.95)    3.68 (± 0.12)    1.31
DGN [19]                                      3.33 (± 0.14)    2.40 (± 0.02)    0.96
DBM, Dropout [20]                             –                –                0.79
Adversarial [21]                              –                –                0.78
Virtual Adversarial [22]                      2.12             1.32             0.64 (± 0.03)
Baseline: MLP, BN, Gaussian noise             21.74 (± 1.77)   5.70 (± 0.20)    0.80 (± 0.03)
Γ-model (Ladder with only top-level cost)     3.06 (± 1.44)    1.53 (± 0.10)    0.78 (± 0.03)
Ladder, only bottom-level cost                1.09 (± 0.32)    0.90 (± 0.05)    0.59 (± 0.03)
Ladder, full                                  1.06 (± 0.37)    0.84 (± 0.08)    0.57 (± 0.02)

4.1 MNIST dataset

For evaluating semi-supervised learning, we randomly split the 60 000 training samples into a 10 000-sample validation set and used M = 50 000 samples as the training set. From the training set, we randomly chose N = 100, 1000, or all labels for the supervised cost.1 All the samples were used for the decoder, which does not need the labels. The validation set was used for evaluating the model structure and hyperparameters. We also balanced the classes to ensure that no particular class was over-represented. We repeated the training 10 times, varying the random seed for the splits. After optimizing the hyperparameters, we performed the final test runs using all the M = 60 000 training samples with 10 different random initializations of the weight matrices and data splits. We trained all the models for 100 epochs followed by 50 epochs of annealing.

4.1.1 Fully-connected MLP

A useful test for general learning algorithms is the permutation invariant MNIST classification task. We chose the layer sizes of the baseline model to be 784-1000-500-250-250-250-10. The hyperparameters we tuned for each model are the noise level that is added to the inputs and to each layer, and the denoising cost multipliers λ(l). We also ran the supervised baseline model with various noise levels. For models with just one cost multiplier, we optimized them with a search grid {. .
., 0.1, 0.2, 0.5, 1, 2, 5, 10, . . .}. Ladder networks with a cost function on all layers have a much larger search space and we explored it much more sparsely. For the complete set of selected denoising cost multipliers and other hyperparameters, please refer to the code. The results presented in Table 1 show that the proposed method outperforms all the previously reported results. Encouraged by the good results, we also tested with N = 50 labels and got a test error of 1.62 % (± 0.65 %). The simple Γ-model also performed surprisingly well, particularly for N = 1000 labels. With N = 100 labels, all models sometimes failed to converge properly. With bottom-level or full cost in Ladder, around 5 % of runs result in a test error of over 2 %. In order to be able to estimate the average test error reliably in the presence of such random outliers, we ran 40 instead of 10 test runs with random initializations.

1In all the experiments, we were careful not to optimize any parameters, hyperparameters, or model choices based on the results on the held-out test samples. As is customary, we used 10 000 labeled validation samples even for those settings where we only used 100 labeled samples for training. Obviously this is not something that could be done in a real case with just 100 labeled samples. However, MNIST classification is such an easy task even in the permutation invariant case that 100 labeled samples there correspond to a far greater number of labeled samples in many other datasets.

Table 2: CNN results for MNIST

Test error % without data augmentation, with # of used labels    100              all
EmbedCNN [15]                                                    7.75             –
SWWAE [24]                                                       9.17             0.71
Baseline: Conv-Small, supervised only                            6.43 (± 0.84)    0.36
Conv-FC                                                          0.99 (± 0.15)    –
Conv-Small, Γ-model                                              0.89 (± 0.50)    –

4.1.2 Convolutional networks

We tested two convolutional networks for the general MNIST classification task and focused on the 100-label case.
The first network was a straightforward extension of the fully connected network tested in the permutation-invariant case. We turned the first fully connected layer into a convolution with 26-by-26 filters, resulting in a 3-by-3 spatial map of 1000 features. Each of the 9 spatial locations was processed independently by a network with the same structure as in the previous section, finally resulting in a 3-by-3 spatial map of 10 features. These were pooled with a global mean-pooling layer. We used the same hyperparameters that were optimal for the permutation-invariant task. In Table 2, this model is referred to as Conv-FC. With the second network, which was inspired by ConvPool-CNN-C from Springenberg et al. [23], we only tested the Γ-model. The exact architecture of this network is detailed in the supplementary material or Ref. [8]. It is referred to as Conv-Small since it is a smaller version of the network used for the CIFAR-10 dataset. The results in Table 2 confirm that even the single convolution on the bottom level improves the results over the fully connected network. More convolutions improve the Γ-model significantly, although the variance is still high. The Ladder network with denoising targets on every level converges much more reliably. Taken together, these results suggest that combining the generalization ability of convolutional networks (see footnote 2) and the efficient unsupervised learning of the full Ladder network would have resulted in even better performance, but this was left for future work.

4.2 Convolutional networks on CIFAR-10

The CIFAR-10 dataset consists of small 32-by-32 RGB images from 10 classes. There are 50 000 labeled samples for training and 10 000 for testing. We decided to test the simple Γ-model with the convolutional architecture ConvPool-CNN-C by Springenberg et al. [23]. The main differences to ConvPool-CNN-C are the use of Gaussian noise instead of dropout and the convolutional per-channel batch normalization following Ioffe and Szegedy [25].
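The two ingredients just mentioned, additive Gaussian noise in place of dropout and per-channel batch normalization (normalizing each channel over the batch and spatial axes), can be sketched in a few lines of numpy. This is an illustrative sketch with our own helper names, not the paper's Theano implementation:

```python
import numpy as np

def per_channel_batchnorm(x, eps=1e-5):
    """Normalize feature maps of shape (N, C, H, W) so that each channel
    has zero mean and unit variance over the batch and spatial axes."""
    mean = x.mean(axis=(0, 2, 3), keepdims=True)
    var = x.var(axis=(0, 2, 3), keepdims=True)
    return (x - mean) / np.sqrt(var + eps)

def corrupt(x, noise_std, rng):
    """Corrupted encoder path: normalize, then add isotropic Gaussian noise."""
    return per_channel_batchnorm(x) + noise_std * rng.standard_normal(x.shape)

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 3, 32, 32)) * 5.0 + 2.0  # toy batch of feature maps
z = per_channel_batchnorm(x)
# z is standardized per channel; corrupt(x, sigma, rng) adds noise on top
```

The clean encoder path would use `per_channel_batchnorm` alone, while the corrupted path feeds `corrupt(...)` forward; the trainable scale and shift of full batch normalization are omitted here for brevity.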
For a more detailed description of the model, please refer to model Conv-Large in the supplementary material. The hyperparameters (noise level, denoising cost multipliers, and number of epochs) for all models were optimized using M = 40 000 samples for training and the remaining 10 000 samples for validation. After the best hyperparameters were selected, the final model was trained with these settings on all the M = 50 000 samples. All experiments were run with 4 different random initializations of the weight matrices and data splits. We applied global contrast normalization and whitening following Goodfellow et al. [26], but no data augmentation was used. The results are shown in Table 3. The supervised reference was obtained with a model closer to the original ConvPool-CNN-C in the sense that dropout rather than additive Gaussian noise was used for regularization (see footnote 3). We spent some time tuning the regularization of our fully supervised baseline model for N = 4 000 labels and indeed, its results exceed the previous state of the art. This tuning was important to make sure that the improvement offered by the denoising target of the Γ-model is not a sign of a poorly regularized baseline model.

Footnote 2: In general, convolutional networks excel in the MNIST classification task. The performance of the fully supervised Conv-Small with all labels is in line with the literature and is provided as a rough reference only (only one run, no attempts to optimize, not available in the code package).

Footnote 3: The same caveats hold for this fully supervised reference result for all labels as with MNIST: only one run, no attempts to optimize, not available in the code package.

Table 3: Test results for CNN on the CIFAR-10 dataset without data augmentation. Test error % with # of used labels (4 000 / all):
  All-Convolutional ConvPool-CNN-C [23]: - / 9.31
  Spike-and-Slab Sparse Coding [27]: 31.9 / -
  Baseline: Conv-Large, supervised only: 23.33 (± 0.61) / 9.27
  Conv-Large, Γ-model: 20.40 (± 0.47) / -
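The global contrast normalization applied to the CIFAR-10 inputs above can be sketched as follows. The ZCA whitening step is omitted, and the contrast scale of 55 is a conventional choice assumed here rather than a value stated in the text:

```python
import numpy as np

def global_contrast_normalize(X, scale=55.0, eps=1e-8):
    """Global contrast normalization: for each image (one row of X),
    subtract its mean pixel value and divide by its regularized RMS
    contrast, then rescale to a common contrast level."""
    X = X - X.mean(axis=1, keepdims=True)
    contrast = np.sqrt((X ** 2).mean(axis=1)) + eps
    return scale * X / contrast[:, None]

rng = np.random.default_rng(1)
images = rng.uniform(0.0, 255.0, size=(4, 32 * 32 * 3))  # 4 flattened RGB images
gcn = global_contrast_normalize(images)
# each row of gcn now has zero mean and RMS contrast ~= 55
```

After this per-image step, whitening (e.g. ZCA, fit on the training set) would be applied across images.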
Although the improvement is not as dramatic as in the MNIST experiments, it came with a very simple addition to standard supervised training.

5 Related Work

Early works in semi-supervised learning [28, 29] proposed an approach where inputs x are first assigned to clusters, and each cluster has its class label. Unlabeled data would affect the shapes and sizes of the clusters, and thus alter the classification result. Label propagation methods [30] estimate P(y | x), but adjust probabilistic labels q(y^(n)) based on the assumption that nearest neighbors are likely to have the same label. Weston et al. [15] explored deep versions of label propagation. There is an interesting connection between our Γ-model and the contractive cost used by Rifai et al. [16]: a linear denoising function ẑ_i^(L) = a_i z̃_i^(L) + b_i, where a_i and b_i are parameters, turns the denoising cost into a stochastic estimate of the contractive cost. In other words, our Γ-model seems to combine clustering and label propagation with regularization by a contractive cost. Recently, Miyato et al. [22] achieved impressive results with a regularization method that is similar to the idea of contractive cost. They required the output of the network to change as little as possible close to the input samples. As this requires no labels, they were able to use unlabeled samples for regularization. The Multi-prediction deep Boltzmann machine (MP-DBM) [5] is a way to train a DBM with backpropagation through variational inference. The targets of the inference include both supervised targets (classification) and unsupervised targets (reconstruction of missing inputs) that are used in training simultaneously. The connections through the inference network are somewhat analogous to our lateral connections. Specifically, there are inference paths from observed inputs to reconstructed inputs that do not go all the way up to the highest layers.
Compared to our approach, the MP-DBM requires iterative inference with some initialization for the hidden activations, whereas in our case, the inference is a simple single-pass feedforward procedure. Kingma et al. [19] proposed deep generative models for semi-supervised learning, based on variational autoencoders. Their models can be trained with the variational EM algorithm, stochastic gradient variational Bayes, or stochastic backpropagation. Compared with the Ladder network, an interesting point is that the variational autoencoder computes the posterior estimate of the latent variables with the encoder alone, while the Ladder network uses the decoder too to compute an implicit posterior approximation (the encoder provides the likelihood part, which gets combined with the prior). Zeiler et al. [31] train deep convolutional autoencoders in a manner comparable to ours. They define max-pooling operations in the encoder to feed the max function upwards to the next layer, while the argmax function is fed laterally to the decoder. The network is trained one layer at a time using a cost function that includes a pixel-level reconstruction error and a regularization term to promote sparsity. Zhao et al. [24] use a similar structure and call it the stacked what-where autoencoder (SWWAE). Their network is trained simultaneously to minimize a combination of the supervised cost and reconstruction errors on each level, just like ours.

6 Discussion

We showed how a simultaneous unsupervised learning task improves CNN and MLP networks, reaching the state of the art in various semi-supervised learning tasks. In particular, the performance obtained with very small numbers of labels is much better than previously published results, which shows that the method is capable of making good use of unsupervised learning.
However, the same model also achieves state-of-the-art results and a significant improvement over the baseline model with full labels in permutation-invariant MNIST classification, which suggests that the unsupervised task does not disturb supervised learning. The proposed model is simple and easy to implement with many existing feedforward architectures, as the training is based on backpropagation from a simple cost function. It is quick to train and the convergence is fast, thanks to batch normalization. Not surprisingly, the largest improvements in performance were observed in models which have a large number of parameters relative to the number of available labeled samples. With CIFAR-10, we started with a model which was originally developed for a fully supervised task. This has the benefit of building on existing experience, but it may well be that the best results will be obtained with models which have far more parameters than fully supervised approaches could handle. An obvious future line of research will therefore be to study what kind of encoders and decoders are best suited for the Ladder network. In this work, we made very few modifications to the encoders, whose structure has been optimized for supervised learning, and we designed the parametrization of the vertical mappings of the decoder to mirror the encoder: the flow of information is just reversed. There is nothing preventing the decoder from having a different structure than the encoder. An interesting future line of research will be the extension of the Ladder networks to the temporal domain. While there exist datasets with millions of labeled samples for still images, it is prohibitively costly to label thousands of hours of video streams. The Ladder networks can be scaled up easily and therefore offer an attractive approach for semi-supervised learning in such large-scale problems.
Acknowledgements

We have received comments and help from a number of colleagues who would all deserve to be mentioned, but we wish to thank especially Yann LeCun, Diederik Kingma, Aaron Courville, Ian Goodfellow, Søren Sønderby, Jim Fan and Hugo Larochelle for their helpful comments and suggestions. The software for the simulations for this paper was based on Theano [32] and Blocks [33]. We also acknowledge the computational resources provided by the Aalto Science-IT project. The Academy of Finland has supported Tapani Raiko.

References

[1] Harri Valpola. From neural PCA to deep unsupervised learning. In Adv. in Independent Component Analysis and Learning Machines, pages 143–171. Elsevier, 2015. arXiv:1411.7783.
[2] Steven C Suddarth and YL Kergosien. Rule-injection hints as a means of improving network performance and learning time. In Proceedings of the EURASIP Workshop 1990 on Neural Networks, pages 120–129. Springer, 1990.
[3] Marc'Aurelio Ranzato and Martin Szummer. Semi-supervised learning of compact document representations with deep networks. In Proc. of ICML 2008, pages 792–799. ACM, 2008.
[4] Alexey Dosovitskiy, Jost Tobias Springenberg, Martin Riedmiller, and Thomas Brox. Discriminative unsupervised feature learning with convolutional neural networks. In Advances in Neural Information Processing Systems 27 (NIPS 2014), pages 766–774, 2014.
[5] Ian Goodfellow, Mehdi Mirza, Aaron Courville, and Yoshua Bengio. Multi-prediction deep Boltzmann machines. In Advances in Neural Information Processing Systems 26 (NIPS 2013), pages 548–556, 2013.
[6] Geoffrey E Hinton and Ruslan R Salakhutdinov. Reducing the dimensionality of data with neural networks. Science, 313(5786):504–507, 2006.
[7] Antti Rasmus, Tapani Raiko, and Harri Valpola. Denoising autoencoder with modulated lateral connections learns invariant representations of natural images. arXiv:1412.7210, 2015.
[8] Antti Rasmus, Harri Valpola, Mikko Honkala, Mathias Berglund, and Tapani Raiko.
Semi-supervised learning with ladder networks. arXiv preprint arXiv:1507.02672, 2015.
[9] Yoshua Bengio, Li Yao, Guillaume Alain, and Pascal Vincent. Generalized denoising auto-encoders as generative models. In Advances in Neural Information Processing Systems 26 (NIPS 2013), pages 899–907, 2013.
[10] Jocelyn Sietsma and Robert JF Dow. Creating artificial neural networks that generalize. Neural Networks, 4(1):67–79, 1991.
[11] Pascal Vincent, Hugo Larochelle, Isabelle Lajoie, Yoshua Bengio, and Pierre-Antoine Manzagol. Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion. JMLR, 11:3371–3408, 2010.
[12] Jaakko Särelä and Harri Valpola. Denoising source separation. JMLR, 6:233–272, 2005.
[13] Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In International Conference on Machine Learning (ICML), pages 448–456, 2015.
[14] Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In the International Conference on Learning Representations (ICLR 2015), San Diego, 2015. arXiv:1412.6980.
[15] Jason Weston, Frédéric Ratle, Hossein Mobahi, and Ronan Collobert. Deep learning via semi-supervised embedding. In Neural Networks: Tricks of the Trade, pages 639–655. Springer, 2012.
[16] Salah Rifai, Yann N Dauphin, Pascal Vincent, Yoshua Bengio, and Xavier Muller. The manifold tangent classifier. In Advances in Neural Information Processing Systems 24 (NIPS 2011), pages 2294–2302, 2011.
[17] Dong-Hyun Lee. Pseudo-label: The simple and efficient semi-supervised learning method for deep neural networks. In Workshop on Challenges in Representation Learning, ICML 2013, 2013.
[18] Nikolaos Pitelis, Chris Russell, and Lourdes Agapito. Semi-supervised learning using an unsupervised atlas. In Machine Learning and Knowledge Discovery in Databases (ECML PKDD 2014), pages 565–580. Springer, 2014.
[19] Diederik P Kingma, Shakir Mohamed, Danilo Jimenez Rezende, and Max Welling. Semi-supervised learning with deep generative models. In Advances in Neural Information Processing Systems 27 (NIPS 2014), pages 3581–3589, 2014.
[20] Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout: A simple way to prevent neural networks from overfitting. JMLR, 15(1):1929–1958, 2014.
[21] Ian Goodfellow, Jonathon Shlens, and Christian Szegedy. Explaining and harnessing adversarial examples. In the International Conference on Learning Representations (ICLR 2015), 2015. arXiv:1412.6572.
[22] Takeru Miyato, Shin-ichi Maeda, Masanori Koyama, Ken Nakae, and Shin Ishii. Distributional smoothing by virtual adversarial examples. arXiv:1507.00677, 2015.
[23] Jost Tobias Springenberg, Alexey Dosovitskiy, Thomas Brox, and Martin A. Riedmiller. Striving for simplicity: The all convolutional net. arXiv:1412.6806, 2014.
[24] Junbo Zhao, Michael Mathieu, Ross Goroshin, and Yann LeCun. Stacked what-where auto-encoders. arXiv:1506.02351, 2015.
[25] Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv:1502.03167, 2015.
[26] Ian J. Goodfellow, David Warde-Farley, Mehdi Mirza, Aaron Courville, and Yoshua Bengio. Maxout networks. In Proc. of ICML 2013, 2013.
[27] Ian Goodfellow, Yoshua Bengio, and Aaron C Courville. Large-scale feature learning with spike-and-slab sparse coding. In Proc. of ICML 2012, pages 1439–1446, 2012.
[28] G. McLachlan. Iterative reclassification procedure for constructing an asymptotically optimal rule of allocation in discriminant analysis. J. American Statistical Association, 70:365–369, 1975.
[29] D. Titterington, A. Smith, and U. Makov. Statistical Analysis of Finite Mixture Distributions. Wiley Series in Probability and Mathematical Statistics. Wiley, 1985.
[30] Martin Szummer and Tommi Jaakkola.
Partially labeled classification with Markov random walks. Advances in Neural Information Processing Systems 15 (NIPS 2002), 14:945–952, 2003.
[31] Matthew D Zeiler, Graham W Taylor, and Rob Fergus. Adaptive deconvolutional networks for mid and high level feature learning. In ICCV 2011, pages 2018–2025. IEEE, 2011.
[32] Frédéric Bastien, Pascal Lamblin, Razvan Pascanu, James Bergstra, Ian J. Goodfellow, Arnaud Bergeron, Nicolas Bouchard, and Yoshua Bengio. Theano: new features and speed improvements. Deep Learning and Unsupervised Feature Learning NIPS 2012 Workshop, 2012.
[33] Bart van Merriënboer, Dzmitry Bahdanau, Vincent Dumoulin, Dmitriy Serdyuk, David Warde-Farley, Jan Chorowski, and Yoshua Bengio. Blocks and fuel: Frameworks for deep learning. CoRR, abs/1506.00619, 2015. URL http://arxiv.org/abs/1506.00619.
Empirical Localization of Homogeneous Divergences on Discrete Sample Spaces

Takashi Takenouchi
Department of Complex and Intelligent Systems, Future University Hakodate
116-2 Kamedanakano, Hakodate, Hokkaido, 040-8655, Japan
ttakashi@fun.ac.jp

Takafumi Kanamori
Department of Computer Science and Mathematical Informatics, Nagoya University
Furocho, Chikusaku, Nagoya 464-8601, Japan
kanamori@is.nagoya-u.ac.jp

Abstract

In this paper, we propose a novel parameter estimator for probabilistic models on discrete spaces. The proposed estimator is derived from the minimization of a homogeneous divergence and can be constructed without calculation of the normalization constant, which is frequently infeasible for models on discrete spaces. We investigate statistical properties of the proposed estimator, such as consistency and asymptotic normality, and reveal a relationship with information geometry. Some experiments show that the proposed estimator attains performance comparable to the maximum likelihood estimator with drastically lower computational cost.

1 Introduction

Parameter estimation of probabilistic models on discrete spaces is a popular and important issue in the fields of machine learning and pattern recognition. For example, the Boltzmann machine (with hidden variables) [1] [2] [3] is a very popular probabilistic model for representing binary variables, and attracts increasing attention in the context of deep learning [4]. Training of a Boltzmann machine, i.e., estimation of its parameters, is usually done by maximum likelihood estimation (MLE). The MLE for the Boltzmann machine cannot be solved explicitly, and gradient-based optimization is frequently used. A difficulty of the gradient-based optimization is that the calculation of the gradient requires the normalization constant (partition function) in each step of the optimization, and its computational cost is sometimes of exponential order.
The problem of computational cost is common to other probabilistic models on discrete spaces, and various kinds of approximation methods have been proposed to overcome the difficulty. One approach tries to approximate the probabilistic model by a tractable model via the mean-field approximation, which considers a model assuming independence of variables [5]. Another approach, such as the contrastive divergence [6], avoids the exponential-time calculation by Markov Chain Monte Carlo (MCMC) sampling. In the literature of parameter estimation of probabilistic models for continuous variables, [7] employs a score function which is a gradient of the log-density with respect to the data vector rather than the parameters. This approach makes it possible to estimate parameters without calculating the normalization term, by focusing on the shape of the density function. [8] extended the method to discrete variables, defining the information of a "neighbor" by contrasting the probability of an observation with that of a flipped variable. [9] proposed generalized local scoring rules on discrete sample spaces, and [10] proposed an approximated estimator with the Bregman divergence. In this paper, we propose a novel parameter estimator for models on discrete spaces which does not require calculation of the normalization constant. The proposed estimator is defined by minimization of a risk function derived from an unnormalized model and a homogeneous divergence having a weak coincidence axiom. The derived risk function is convex for various kinds of models, including the higher-order Boltzmann machine. We investigate statistical properties of the proposed estimator, such as consistency, and reveal a relationship between the proposed estimator and the α-divergence [11].

2 Settings

Let x be a d-dimensional vector of random variables in a discrete space X (typically {+1, −1}^d), and let the bracket ⟨f⟩ denote the summation of a function f(x) over X, i.e., ⟨f⟩ = ∑_{x∈X} f(x).
Let M and P be the space of all non-negative finite measures on X and the subspace of all probability measures on X, respectively:

M = {f(x) | ⟨f⟩ < ∞, f(x) ≥ 0},  P = {f(x) | ⟨f⟩ = 1, f(x) ≥ 0}.

In this paper, we focus on parameter estimation of a probabilistic model q̄_θ(x) on X, written as

q̄_θ(x) = q_θ(x) / Z_θ,  (1)

where θ is an m-dimensional vector of parameters, q_θ(x) is an unnormalized model in M, and Z_θ = ⟨q_θ⟩ is the normalization constant. The computation of the normalization constant Z_θ sometimes requires a calculation of exponential order and is sometimes difficult for models on discrete spaces. Note that the unnormalized model q_θ(x) is not normalized, i.e., ⟨q_θ⟩ = ∑_{x∈X} q_θ(x) = 1 does not necessarily hold. Let ψ_θ(x) be a function on X; throughout the paper, we assume without loss of generality that the unnormalized model q_θ(x) can be written as

q_θ(x) = exp(ψ_θ(x)).  (2)

Remark 1. By replacing ψ_θ(x) with ψ_θ(x) − log Z_θ, the normalized model (1) can be written in the form (2).

Example 1. The Bernoulli distribution on X = {+1, −1} is the simplest example of the probabilistic model (1), with the function ψ_θ(x) = θx.

Example 2. With the function ψ_{θ,k}(x) = (x_1, . . . , x_d, x_1x_2, . . . , x_{d−1}x_d, x_1x_2x_3, . . .)θ, we can define a k-th order Boltzmann machine [1, 12].

Example 3. Let x_o ∈ {+1, −1}^{d_1} and x_h ∈ {+1, −1}^{d_2} be an observed vector and a hidden vector, respectively, and let x = (x_o^T, x_h^T) ∈ {+1, −1}^{d_1+d_2}, where T indicates the transpose, be the concatenated vector. The function ψ_{h,θ}(x_o) for the Boltzmann machine with hidden variables is written as

ψ_{h,θ}(x_o) = log ∑_{x_h} exp(ψ_{θ,2}(x)),  (3)

where ∑_{x_h} is the summation with respect to the hidden variable x_h.

Let us assume that a dataset D = {x_i}_{i=1}^n, generated by an underlying distribution p(x), is given, and let Z be the set of all patterns which appear in the dataset D. The empirical distribution p̃(x) associated with the dataset D is defined as

p̃(x) = n_x/n for x ∈ Z, and p̃(x) = 0 otherwise,

where n_x = ∑_{i=1}^n I(x_i = x) is the number of occurrences of pattern x in the dataset D.
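To make Example 2 and the cost of Z_θ concrete, here is a minimal numpy sketch (the helper names are ours, not the paper's) of a fully visible second-order Boltzmann machine. The exact partition function enumerates all 2^d states, which is precisely the computation the proposed estimator will avoid:

```python
import itertools
import numpy as np

def phi_2nd_order(x):
    """Sufficient statistics of Example 2 with k = 2: all x_i and all
    pairwise products x_i x_j with i < j."""
    x = np.asarray(x, dtype=float)
    d = len(x)
    pairs = [x[i] * x[j] for i in range(d) for j in range(i + 1, d)]
    return np.concatenate([x, np.array(pairs)])

def log_q(theta, x):
    """Unnormalized log-model: psi_theta(x) = theta^T phi(x), q_theta = exp(psi)."""
    return float(theta @ phi_2nd_order(x))

def log_partition(theta, d):
    """Exact log Z_theta via a sum over all 2^d states (feasible only for small d)."""
    states = itertools.product([1.0, -1.0], repeat=d)
    return np.log(sum(np.exp(log_q(theta, s)) for s in states))

d = 4
rng = np.random.default_rng(2)
theta = rng.normal(size=d + d * (d - 1) // 2)
logZ = log_partition(theta, d)
# after normalizing by Z_theta, the probabilities sum to one over all 2^d states
total = sum(np.exp(log_q(theta, s) - logZ)
            for s in itertools.product([1.0, -1.0], repeat=d))
```

For d = 10, as in the paper's experiments, the enumeration already costs 2^10 = 1024 terms per gradient step, and it grows exponentially with d.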
Definition 1. For the unnormalized model (2) and distributions p(x) and p̃(x) in P, probability functions r_{α,θ}(x) and r̃_{α,θ}(x) on X are defined by

r_{α,θ}(x) = p(x)^α q_θ(x)^{1−α} / ⟨p^α q_θ^{1−α}⟩,  r̃_{α,θ}(x) = p̃(x)^α q_θ(x)^{1−α} / ⟨p̃^α q_θ^{1−α}⟩.

The distribution r_{α,θ} (r̃_{α,θ}) is the e-mixture of the unnormalized model (2) and p(x) (p̃(x)) with ratio α [11].

Remark 2. We observe that r_{0,θ}(x) = r̃_{0,θ}(x) = q̄_θ(x), r_{1,θ}(x) = p(x), and r̃_{1,θ}(x) = p̃(x). Also, if p(x) = q̄_{θ_0}(x), then r_{α,θ_0}(x) = q̄_{θ_0}(x) holds for arbitrary α.

To estimate the parameter θ of the probabilistic model q̄_θ, the MLE defined by θ̂_mle = argmax_θ L(θ) is frequently employed, where L(θ) = ∑_{i=1}^n log q̄_θ(x_i) is the log-likelihood of the parameter θ under the model q̄_θ. Though the MLE is an asymptotically consistent and efficient estimator, its main drawback is that the computational cost for probabilistic models on discrete spaces can become exponential. Since the MLE does not have an explicit solution in general, the estimation of the parameter is done by gradient-based optimization with the log-likelihood gradient ⟨p̃ψ′_θ⟩ − ⟨q̄_θψ′_θ⟩, where ψ′_θ = ∂ψ_θ/∂θ. While the first term can be easily calculated, the second term involves the normalization term Z_θ, which requires a summation of 2^d terms for X = {+1, −1}^d and is not feasible when d is large.

3 Homogeneous Divergences for Statistical Inference

Divergences are an extension of the squared distance and are often used in statistical inference. Formally, a divergence D(f, g) is a non-negative valued function on M × M or on P × P such that D(f, f) = 0 holds for arbitrary f. Many popular divergences, such as the Kullback-Leibler (KL) divergence defined on P × P, enjoy the coincidence axiom, i.e., D(f, g) = 0 implies f = g. The parameter in the statistical model q̄_θ is estimated by minimizing the divergence D(p̃, q̄_θ) with respect to θ.
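The e-mixtures of Definition 1 and the identities of Remark 2 are easy to check numerically on an explicit finite space; a small sketch, where vectors index the states of X:

```python
import numpy as np

def e_mixture(p, q, alpha):
    """r_alpha(x) proportional to p(x)^alpha * q(x)^(1-alpha): the e-mixture
    of a probability vector p and an unnormalized model q with ratio alpha."""
    w = p ** alpha * q ** (1.0 - alpha)
    return w / w.sum()

rng = np.random.default_rng(3)
q = rng.uniform(0.5, 2.0, size=8)   # unnormalized model on an 8-state space
p = q / q.sum()                     # underlying distribution = normalized model
r_half = e_mixture(p, q, 0.5)
# Remark 2: alpha = 0 recovers the normalized model, alpha = 1 recovers p,
# and when p equals the normalized model, r_alpha = p for every alpha.
```

Note that normalizing the e-mixture only requires sums over the support of p when p is an empirical distribution, which is the computational point exploited later.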
In statistical inference using unnormalized models, the coincidence axiom of the divergence is not suitable, since the probability and the unnormalized model do not exactly match in general. Our purpose is to estimate the underlying distribution up to a constant factor using unnormalized models. Hence, divergences having the property of the weak coincidence axiom, i.e., D(f, g) = 0 if and only if g = cf for some c > 0, are good candidates. As a class of divergences with the weak coincidence axiom, we focus on homogeneous divergences, which satisfy the equality D(f, g) = D(f, cg) for any f, g ∈ M and any c > 0. A representative homogeneous divergence is the pseudo-spherical (PS) divergence [13], or in other words, the γ-divergence [14], which is derived from the Hölder inequality. Assume that γ is a positive constant. For all non-negative functions f, g in M, the Hölder inequality

⟨f^{γ+1}⟩^{1/(γ+1)} ⟨g^{γ+1}⟩^{γ/(γ+1)} − ⟨f g^γ⟩ ≥ 0

holds, and it becomes an equality if and only if f and g are linearly dependent. The PS-divergence D_γ(f, g) for f, g ∈ M is defined by

D_γ(f, g) = 1/(1+γ) · log⟨f^{γ+1}⟩ + γ/(1+γ) · log⟨g^{γ+1}⟩ − log⟨f g^γ⟩,  γ > 0.  (4)

The PS-divergence is homogeneous, and the Hölder inequality ensures its non-negativity and the weak coincidence axiom. One can confirm that the scaled PS-divergence, γ^{−1}D_γ, converges to the extended KL-divergence defined on M × M as γ → 0. The PS-divergence is used to obtain a robust estimator [14]. As shown in (4), the standard PS-divergence from the empirical distribution p̃ to the unnormalized model q_θ requires the computation of ⟨q_θ^{γ+1}⟩, which may be infeasible in our setup. To circumvent such an expensive computation, we employ a trick and substitute a model p̃q_θ localized by the empirical distribution for q_θ, which makes it possible to replace the total sum in ⟨q_θ^{γ+1}⟩ with an empirical mean.
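A direct numpy implementation of (4) on a finite space makes the three defining properties checkable: non-negativity via the Hölder inequality, homogeneity D(f, g) = D(f, cg), and the weak coincidence axiom D(f, cf) = 0. This is an illustrative sketch, not code from the paper:

```python
import numpy as np

def ps_divergence(f, g, gamma):
    """Pseudo-spherical (gamma-) divergence of Eq. (4) on a finite space;
    f and g are non-negative vectors, not necessarily normalized."""
    t1 = np.log(np.sum(f ** (gamma + 1.0))) / (gamma + 1.0)
    t2 = gamma * np.log(np.sum(g ** (gamma + 1.0))) / (gamma + 1.0)
    t3 = np.log(np.sum(f * g ** gamma))
    return t1 + t2 - t3

rng = np.random.default_rng(4)
f = rng.uniform(0.1, 1.0, size=16)
g = rng.uniform(0.1, 1.0, size=16)
gamma = 0.7
d_fg = ps_divergence(f, g, gamma)
# d_fg >= 0; rescaling g leaves the value unchanged; D(f, c*f) = 0 for c > 0
```

The homogeneity in g is exactly what lets the estimator below treat the unnormalized model q_θ without ever computing Z_θ.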
More precisely, let us consider the PS-divergence from f = (p^α q^{1−α})^{1/(1+γ)} to g = (p^{α′} q^{1−α′})^{1/(1+γ)} for a probability distribution p ∈ P and an unnormalized model q ∈ M, where α, α′ are two distinct real numbers. Then, the divergence vanishes if and only if p^α q^{1−α} ∝ p^{α′} q^{1−α′}, i.e., q ∝ p. We define the localized PS-divergence S_{α,α′,γ}(p, q) by

S_{α,α′,γ}(p, q) = D_γ((p^α q^{1−α})^{1/(1+γ)}, (p^{α′} q^{1−α′})^{1/(1+γ)})
= 1/(1+γ) · log⟨p^α q^{1−α}⟩ + γ/(1+γ) · log⟨p^{α′} q^{1−α′}⟩ − log⟨p^β q^{1−β}⟩,  (5)

where β = (α + γα′)/(1 + γ). Substituting the empirical distribution p̃ into p, the total sum over X is replaced with a variant of the empirical mean, such as

⟨p̃^α q^{1−α}⟩ = ∑_{x∈Z} (n_x/n)^α q(x)^{1−α}

for a non-zero real number α. Since S_{α,α′,γ}(p, q) = S_{α′,α,1/γ}(p, q) holds, we can assume α > α′ without loss of generality. In summary, the conditions on the real parameters α, α′, γ are

γ > 0,  α > α′,  α ≠ 0,  α′ ≠ 0,  α + γα′ ≠ 0,

where the last condition means β ≠ 0. Let us consider another aspect of the computational issue for the localized PS-divergence. For a probability distribution p and the unnormalized exponential model q_θ, we show that the localized PS-divergence S_{α,α′,γ}(p, q_θ) is convex in θ when the parameters α, α′ and γ are properly chosen.

Theorem 1. Let p ∈ P be any probability distribution, and let q_θ be the unnormalized exponential model q_θ(x) = exp(θ^T φ(x)), where φ(x) is any vector-valued function corresponding to the sufficient statistic of the (normalized) exponential model q̄_θ. For a given β, the localized PS-divergence S_{α,α′,γ}(p, q_θ) is convex in θ for any α, α′, γ satisfying β = (α + γα′)/(1 + γ) if and only if β = 1.

Proof. After some calculation, we have

∂² log⟨p^α q_θ^{1−α}⟩ / ∂θ∂θ^T = (1 − α)² V_{r_{α,θ}}[φ],

where V_{r_{α,θ}}[φ] is the covariance matrix of φ(x) under the probability r_{α,θ}(x). Thus, the Hessian matrix of S_{α,α′,γ}(p, q_θ) is written as

∂²/∂θ∂θ^T S_{α,α′,γ}(p, q_θ) = (1−α)²/(1+γ) · V_{r_{α,θ}}[φ] + γ(1−α′)²/(1+γ) · V_{r_{α′,θ}}[φ] − (1−β)² V_{r_{β,θ}}[φ].
The Hessian matrix is non-negative definite if β = 1. The converse direction is deferred to the supplementary material.

Up to a constant factor, the localized PS-divergence with β = 1 characterized by Theorem 1 is denoted by S_{α,α′}(p, q), defined as

S_{α,α′}(p, q) = 1/(α−1) · log⟨p^α q^{1−α}⟩ + 1/(1−α′) · log⟨p^{α′} q^{1−α′}⟩

for α > 1 > α′ ≠ 0. The parameter α′ can be negative if p is positive on X. Clearly, S_{α,α′}(p, q) satisfies the homogeneity and the weak coincidence axiom, as does S_{α,α′,γ}(p, q).

4 Estimation with the localized pseudo-spherical divergence

Given the empirical distribution p̃ and the unnormalized model q_θ, we define a novel estimator with the localized PS-divergence S_{α,α′,γ} (or S_{α,α′}). Though the localized PS-divergence with the empirical distribution plugged in is not well-defined when α′ < 0, we can formally define the following estimator by restricting the domain X to the observed set of examples Z, even for negative α′:

θ̂ = argmin_θ S_{α,α′,γ}(p̃, q_θ)  (6)
  = argmin_θ [ 1/(1+γ) · log ∑_{x∈Z} (n_x/n)^α q_θ(x)^{1−α} + γ/(1+γ) · log ∑_{x∈Z} (n_x/n)^{α′} q_θ(x)^{1−α′} − log ∑_{x∈Z} (n_x/n)^β q_θ(x)^{1−β} ].

Remark 3. The summation in (6) is taken over Z and is therefore computable even when α, α′, β < 0. Also, the summation includes only |Z| (≤ n) terms, so its computational cost is O(n).

Proposition 1. For the unnormalized model (2), the estimator (6) is Fisher consistent.

Proof. We observe

∂/∂θ S_{α,α′,γ}(q̄_{θ_0}, q_θ) |_{θ=θ_0} = (β − (α + γα′)/(1 + γ)) ⟨q̄_{θ_0} ψ′_{θ_0}⟩ = 0,

implying the Fisher consistency of θ̂.

Theorem 2. Let q_θ(x) be the unnormalized model (2), and let θ_0 be the true parameter of the underlying distribution p(x) = q̄_{θ_0}(x). Then the asymptotic distribution of the estimator (6) is given by

√n (θ̂ − θ_0) ∼ N(0, I(θ_0)^{−1}),

where I(θ_0) = V_{q̄_{θ_0}}[ψ′_{θ_0}] is the Fisher information matrix.

Proof. We sketch the proof; the detailed proof is given in the supplementary material. Let us assume that the empirical distribution is written as p̃(x) = q̄_{θ_0}(x) + ε(x). Note that ⟨ε⟩ = 0 because p̃, q̄_{θ_0} ∈ P.
The asymptotic expansion of the equilibrium condition for the estimator (6) around θ = θ_0 leads to

0 = ∂/∂θ S_{α,α′,γ}(p̃, q_θ) |_{θ=θ̂}
  = ∂/∂θ S_{α,α′,γ}(p̃, q_θ) |_{θ=θ_0} + ∂²/∂θ∂θ^T S_{α,α′,γ}(p̃, q_θ) |_{θ=θ_0} (θ̂ − θ_0) + O(‖θ̂ − θ_0‖²).

By the delta method [15], we have

∂/∂θ S_{α,α′,γ}(p̃, q_θ) |_{θ=θ_0} − ∂/∂θ S_{α,α′,γ}(p, q_θ) |_{θ=θ_0} ≃ −γ/(1+γ)² · (α − α′)² ⟨ψ′_{θ_0} ε⟩,

and from the central limit theorem, we observe that

√n ⟨ψ′_{θ_0} ε⟩ = √n · (1/n) ∑_{i=1}^n (ψ′_{θ_0}(x_i) − ⟨q̄_{θ_0} ψ′_{θ_0}⟩)

asymptotically follows the normal distribution with mean 0 and variance I(θ_0) = V_{q̄_{θ_0}}[ψ′_{θ_0}], which is the Fisher information matrix. Also, from the law of large numbers, we have

∂²/∂θ∂θ^T S_{α,α′,γ}(p̃, q_θ) |_{θ=θ_0} → γ/(1+γ)² · (α − α′)² I(θ_0)

in the limit n → ∞. Consequently, we obtain the assertion of the theorem.

Remark 4. The asymptotic distribution of (6) is equal to that of the MLE, and its variance does not depend on α, α′, γ.

Remark 5. As shown in Remark 1, the normalized model (1) is a special case of the unnormalized model (2), and hence Theorem 2 also holds for the normalized model.

5 Characterization of the localized pseudo-spherical divergence S_{α,α′}

Throughout this section, we assume that β = 1 holds and investigate properties of the localized PS-divergence S_{α,α′}. We discuss the influence of the choice of α, α′ and a characterization of the localized PS-divergence S_{α,α′} in the following subsections.

5.1 Influence of the choice of α, α′

We investigate the influence of the choice of α, α′ on the localized PS-divergence S_{α,α′} from the viewpoint of the estimating equation. The estimator θ̂ derived from S_{α,α′} satisfies

∂S_{α,α′}(p̃, q_θ)/∂θ |_{θ=θ̂} ∝ ⟨r̃_{α′,θ̂} ψ′_{θ̂}⟩ − ⟨r̃_{α,θ̂} ψ′_{θ̂}⟩ = 0,  (7)

which is a moment matching with respect to the two distributions r̃_{α,θ} and r̃_{α′,θ} (α, α′ ≠ 0, 1).
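A direct implementation of (5) restricted to the support of p allows a numerical check of the key properties: homogeneity in q and the weak coincidence S(p, cp) = 0. The parameter choice (α, α′, γ) = (2, −1, 0.5) gives β = 1, the convex case of Theorem 1. An illustrative sketch with our own function names:

```python
import numpy as np

def localized_ps(p, q, alpha, alpha2, gamma):
    """Localized PS-divergence S_{alpha, alpha', gamma}(p, q) of Eq. (5).
    Every term carries a power of p, so the sums run only over the support
    of p; for an empirical distribution that is O(n) work."""
    s = p > 0
    beta = (alpha + gamma * alpha2) / (1.0 + gamma)
    a = np.sum(p[s] ** alpha * q[s] ** (1.0 - alpha))
    b = np.sum(p[s] ** alpha2 * q[s] ** (1.0 - alpha2))
    c = np.sum(p[s] ** beta * q[s] ** (1.0 - beta))
    return np.log(a) / (1.0 + gamma) + gamma * np.log(b) / (1.0 + gamma) - np.log(c)

rng = np.random.default_rng(5)
p = rng.uniform(0.1, 1.0, size=12)
p /= p.sum()                                # probability vector
q = rng.uniform(0.1, 1.0, size=12)          # unnormalized model
val = localized_ps(p, q, 2.0, -1.0, 0.5)    # beta = (2 + 0.5*(-1))/1.5 = 1
# val >= 0, val is unchanged if q is rescaled, and S(p, c*p) = 0
```

The invariance under rescaling of q is what makes the normalization constant of the model irrelevant to the estimator.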
On the other hand, the estimating equation of the MLE is written as

∂L(θ)/∂θ |_{θ=θ_mle} ∝ ⟨p̃ ψ′_{θ_mle}⟩ − ⟨q̄_{θ_mle} ψ′_{θ_mle}⟩ = ⟨r̃_{1,θ_mle} ψ′_{θ_mle}⟩ − ⟨r̃_{0,θ_mle} ψ′_{θ_mle}⟩ = 0,  (8)

which is a moment matching with respect to the empirical distribution p̃ = r̃_{1,θ_mle} and the normalized model q̄_θ = r̃_{0,θ_mle}. While the localized PS-divergence S_{α,α′} is not defined at (α, α′) = (1, 0), comparison of (7) with (8) implies that the behavior of the estimator θ̂ becomes similar to that of the MLE in the limit α → 1 and α′ → 0.

5.2 Relationship with the α-divergence

The α-divergence between two positive measures f, g ∈ M is defined as

D_α(f, g) = 1/(α(1−α)) · ⟨αf + (1−α)g − f^α g^{1−α}⟩,

where α is a real number. Note that D_α(f, g) ≥ 0, with equality if and only if f = g, and the α-divergence reduces to KL(f, g) and KL(g, f) in the limits α → 1 and α → 0, respectively.

Remark 6. An estimator defined by minimizing the α-divergence D_α(p̃, q̄_θ) between the empirical distribution and the normalized model satisfies

∂D_α(p̃, q̄_θ)/∂θ ∝ ⟨p̃^α q̄_θ^{1−α} (ψ′_θ − ⟨q̄_θ ψ′_θ⟩)⟩ = 0

and requires a calculation proportional to |X|, which is infeasible. The same holds for an estimator defined by minimizing the α-divergence D_α(p̃, q_θ) between the empirical distribution and the unnormalized model, which satisfies ∂D_α(p̃, q_θ)/∂θ ∝ ⟨q_θ ψ′_θ − p̃^α q_θ^{1−α} ψ′_θ⟩ = 0.

Here, we assume α, α′ ≠ 0, 1 and consider a trick to cancel the term ⟨g⟩ by mixing two α-divergences as follows:

D_{α,α′}(f, g) = D_α(f, g) + (−α′/α) D_{α′}(f, g)
= ⟨(1/(1−α) − α′/(α(1−α′))) f − 1/(α(1−α)) · f^α g^{1−α} + 1/(α(1−α′)) · f^{α′} g^{1−α′}⟩.

Remark 7. D_{α,α′}(f, g) is a divergence when αα′ < 0 holds, i.e., D_{α,α′}(f, g) ≥ 0 and D_{α,α′}(f, g) = 0 if and only if f = g. Without loss of generality, we assume α > 0 > α′ for D_{α,α′}.

Firstly, we consider an estimator defined by the minimizer of

min_θ ∑_{x∈Z} { 1/(1−α′) · (n_x/n)^{α′} q_θ(x)^{1−α′} − 1/(1−α) · (n_x/n)^α q_θ(x)^{1−α} }.  (9)

Note that the summation in (9) includes only |Z| (≤ n) terms. We remark the following.

Remark 8.
Let q̄θ0(x) be the underlying distribution and qθ(x) be the unnormalized model (2). Then an estimator defined by minimizing Dα,α′(q̄θ0, qθ) is not in general Fisher consistent, i.e.,

∂Dα,α′(q̄θ0, qθ)/∂θ |θ=θ0 ∝ ⟨q̄θ0^(α′) qθ0^(1−α′) ψ′θ0 − q̄θ0^α qθ0^(1−α) ψ′θ0⟩ = (⟨qθ0⟩^(−α′) − ⟨qθ0⟩^(−α)) ⟨qθ0 ψ′θ0⟩ ≠ 0.

This remark shows that an estimator associated with Dα,α′(p̃, qθ) does not have suitable properties such as (asymptotic) unbiasedness and consistency, even though the required computational cost is drastically reduced. Intuitively, this is because the (mixture of) α-divergences satisfies the coincidence axiom. To overcome this drawback, we consider the following minimization problem for estimation of the parameter θ of the model q̄θ(x):

(θ̂, r̂) = argmin_{θ,r} Dα,α′(p̃, r qθ),

where r is a constant corresponding to an inverse of the normalization term Zθ = ⟨qθ⟩.

Proposition 2. Let qθ(x) be the unnormalized model (2). For α > 1 and 0 > α′, the minimization of Dα,α′(p̃, r qθ) is equivalent to the minimization of Sα,α′(p̃, qθ).

Proof. For a given θ, we observe that

r̂θ = argmin_r Dα,α′(p̃, r qθ) = (⟨p̃^α qθ^(1−α)⟩ / ⟨p̃^(α′) qθ^(1−α′)⟩)^(1/(α−α′)). (10)

Note that the computation of (10) requires only O(n) calculation over the sample. By plugging (10) into Dα,α′(p̃, r qθ), we observe

θ̂ = argmin_θ Dα,α′(p̃, r̂θ qθ) = argmin_θ Sα,α′(p̃, qθ). (11)

If α > 1 and α′ < 0 hold, the estimator (11) is equivalent to the estimator associated with the localized PS-divergence Sα,α′, implying that Sα,α′ is characterized by the mixture of α-divergences.

Remark 9. From the viewpoint of information geometry [11], the metric (information geometrical structure) induced by the α-divergence is the Fisher metric, the same metric induced by the KL-divergence. This implies that estimation based on the (mixture of) α-divergences is Fisher efficient, which is an intuitive explanation of Theorem 2. The localized PS-divergences Sα,α′,γ and Sα,α′ with αα′ > 0 can be interpreted as an extension of the α-divergence which preserves Fisher efficiency.
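The closed form (10) can be checked numerically. In the sketch below (illustrative; random positive vectors stand in for the empirical distribution and the unnormalized model on the support Z), the scale r̂ of (10) indeed minimizes Dα,α′(p̃, r qθ) over r for α = 2, α′ = −1:

```python
import numpy as np

alpha, alpha_p = 2.0, -1.0            # alpha > 1 > 0 > alpha'
rng = np.random.default_rng(2)
f = rng.random(6); f /= f.sum()       # stands in for the empirical distribution on Z
q = 3.0 * rng.random(6)               # stands in for the unnormalized model on Z

def D(r):
    # D_{alpha,alpha'}(f, r*q): the mixture of alpha-divergences defined above
    g = r * q
    c0 = 1 / (1 - alpha) - alpha_p / (alpha * (1 - alpha_p))
    return np.sum(c0 * f
                  - f**alpha * g**(1 - alpha) / (alpha * (1 - alpha))
                  + f**alpha_p * g**(1 - alpha_p) / (alpha * (1 - alpha_p)))

# Closed-form minimizer over the scale r, Eq. (10):
r_hat = (np.sum(f**alpha * q**(1 - alpha)) /
         np.sum(f**alpha_p * q**(1 - alpha_p))) ** (1 / (alpha - alpha_p))

assert D(r_hat) <= D(1.1 * r_hat) and D(r_hat) <= D(0.9 * r_hat)
```

Setting dD/dr = 0 gives r^(α−α′) = ⟨f^α q^(1−α)⟩ / ⟨f^(α′) q^(1−α′)⟩, which is exactly (10).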
6 Experiments

We focus on the setting β = 1, in which convexity of the risk function with the unnormalized model exp(θᵀφ(x)) holds (Theorem 1), and we examined the performance of the proposed estimator.

6.1 Fully visible Boltzmann machine

In the first experiment, we compared the proposed estimator with parameter settings (α, α′) = (1.01, 0.01), (1.01, −0.01), (2, −1) against the MLE and the ratio matching method [8]. Note that the ratio matching method also does not require calculation of the normalization constant, and the proposed method with (α, α′) = (1.01, ±0.01) may behave like the MLE, as discussed in section 5.1. All methods were optimized with the optim function in the R language [16]. The dimension d of the input was set to 10, and the synthetic dataset was randomly generated from the second-order Boltzmann machine (Example 2) with a parameter θ* ∼ N(0, I). We repeated the comparison 50 times and report the averaged performance. Figure 1 (a) shows the median of the root mean square errors (RMSEs) between θ* and θ̂ for each method over 50 trials, against the number n of examples. We observe that the proposed estimator works well and is superior to the ratio matching method. In this experiment, the MLE outperforms the proposed method, contrary to the prediction of Theorem 2. This is because the observed patterns were only a small portion of all possible patterns, as shown in Figure 1 (b). Even in such a case, the MLE can take all possible patterns (2^10 = 1024) into account through the normalization term log Zθ ≃ Const + (1/2)‖θ‖², which works like a regularizer. On the other hand, the proposed method genuinely uses only the observed examples, and the asymptotic analysis would not be relevant in this case. Figure 1 (c) shows the median of the computational time of each method against n. The computational time of the MLE does not vary with n because its computational cost is dominated by the calculation of the normalization constant.
Both the proposed estimator and the ratio matching method are significantly faster than the MLE; the ratio matching method is faster than the proposed estimator, while the RMSE of the proposed estimator is lower than that of ratio matching.

6.2 Boltzmann machine with hidden variables

In this subsection, we applied the proposed estimator to the Boltzmann machine with hidden variables, whose associated function is written as (3). The proposed estimator with parameter settings (α, α′) = (1.01, 0.01), (1.01, −0.01), (2, −1) was compared with the MLE. The dimension d1 of observed variables was fixed to 10, the dimension d2 of hidden variables was set to 2, and the parameter θ* was generated as θ* ∼ N(0, I), including the parameters corresponding to hidden variables. Note that the Boltzmann machine with hidden variables is not identifiable, and different values of the parameter do not necessarily generate different probability distributions, implying that estimators are influenced by local minima. We therefore measured the performance of each estimator by the averaged log-likelihood (1/n) ∑i=1..n log q̄θ̂(xi) rather than by the RMSE. An initial value of the parameter was drawn from N(0, I) and used in common by all methods. We repeated the comparison 50 times and observed the averaged performance. Figure 2 (a) shows the median of the averaged log-likelihoods of each method over 50 trials, against the number n of examples.

Figure 1: (a) Median of RMSEs of each method against n, in log scale. (b) Box-whisker plot of the number |Z| of unique patterns in the dataset D against n. (c) Median of computational time of each method against n, in log scale.
We observe that the proposed estimator is comparable with the MLE when the number n of examples becomes large. Note that the averaged log-likelihood of the MLE initially decreases when n is small; this is due to overfitting of the model. Figure 2 (b) shows the median of the averaged log-likelihoods of each method on a test dataset consisting of 10000 examples, over 50 trials. Figure 2 (c) shows the median of the computational time of each method against n, and we observe that the proposed estimator is significantly faster than the MLE.

Figure 2: (a) Median of averaged log-likelihoods of each method against n. (b) Median of averaged log-likelihoods of each method calculated for the test dataset against n. (c) Median of computational time of each method against n, in log scale.

7 Conclusions

We proposed a novel estimator for probabilistic models on discrete spaces, based on the unnormalized model and the localized PS-divergence, which has the homogeneity property. The proposed estimator can be constructed without calculation of the normalization constant and is asymptotically efficient, which is the most important virtue of the proposed estimator. Numerical experiments show that the proposed estimator is comparable to the MLE, while the required computational cost is drastically reduced.

References

[1] Hinton, G. E. & Sejnowski, T. J. (1986) Learning and relearning in Boltzmann machines. MIT Press, Cambridge, Mass, 1:282–317.
[2] Ackley, D. H., Hinton, G. E. & Sejnowski, T. J. (1985) A learning algorithm for Boltzmann machines. Cognitive Science, 9(1):147–169.
[3] Amari, S., Kurata, K. & Nagaoka, H. (1992) Information geometry of Boltzmann machines.
IEEE Transactions on Neural Networks, 3:260–271.
[4] Hinton, G. E. & Salakhutdinov, R. R. (2012) A better way to pretrain deep Boltzmann machines. In Advances in Neural Information Processing Systems, pp. 2447–2455. Cambridge, MA: MIT Press.
[5] Opper, M. & Saad, D. (2001) Advanced Mean Field Methods: Theory and Practice. MIT Press, Cambridge, MA.
[6] Hinton, G. E. (2002) Training products of experts by minimizing contrastive divergence. Neural Computation, 14(8):1771–1800.
[7] Hyvärinen, A. (2005) Estimation of non-normalized statistical models by score matching. Journal of Machine Learning Research, 6:695–708.
[8] Hyvärinen, A. (2007) Some extensions of score matching. Computational Statistics & Data Analysis, 51(5):2499–2512.
[9] Dawid, A. P., Lauritzen, S. & Parry, M. (2012) Proper local scoring rules on discrete sample spaces. The Annals of Statistics, 40(1):593–608.
[10] Gutmann, M. & Hirayama, H. (2012) Bregman divergence as general framework to estimate unnormalized statistical models. arXiv preprint arXiv:1202.3727.
[11] Amari, S. & Nagaoka, H. (2000) Methods of Information Geometry, volume 191 of Translations of Mathematical Monographs. Oxford University Press.
[12] Sejnowski, T. J. (1986) Higher-order Boltzmann machines. In American Institute of Physics Conference Series, 151:398–403.
[13] Good, I. J. (1971) Comment on "Measuring information and uncertainty," by R. J. Buehler. In Godambe, V. P. & Sprott, D. A., editors, Foundations of Statistical Inference, pp. 337–339. Toronto: Holt, Rinehart and Winston.
[14] Fujisawa, H. & Eguchi, S. (2008) Robust parameter estimation with a small bias against heavy contamination. Journal of Multivariate Analysis, 99(9):2053–2081.
[15] Van der Vaart, A. W. (1998) Asymptotic Statistics. Cambridge University Press.
[16] R Core Team. (2013) R: A Language and Environment for Statistical Computing. R Foundation for Statistical Computing, Vienna, Austria.
Enforcing balance allows local supervised learning in spiking recurrent networks

Ralph Bourdoukan
Group for Neural Theory, ENS Paris
Rue d'Ulm, 29, Paris, France
ralph.bourdoukan@gmail.com

Sophie Deneve
Group for Neural Theory, ENS Paris
Rue d'Ulm, 29, Paris, France
sophie.deneve@ens.fr

Abstract

To predict sensory inputs or control motor trajectories, the brain must constantly learn temporal dynamics based on error feedback. However, it remains unclear how such supervised learning is implemented in biological neural networks. Learning in recurrent spiking networks is notoriously difficult because local changes in connectivity may have an unpredictable effect on the global dynamics. The most commonly used learning rules, such as temporal back-propagation, are not local and thus not biologically plausible. Furthermore, reproducing the Poisson-like statistics of neural responses requires the use of networks with balanced excitation and inhibition. Such balance is easily destroyed during learning. Using a top-down approach, we show how networks of integrate-and-fire neurons can learn arbitrary linear dynamical systems by feeding back their error as a feed-forward input. The network uses two types of recurrent connections: fast and slow. The fast connections learn to balance excitation and inhibition using a voltage-based plasticity rule. The slow connections are trained to minimize the error feedback using a current-based Hebbian learning rule. Importantly, the balance maintained by the fast connections is crucial to ensure that global error signals are available locally in each neuron, in turn resulting in a local learning rule for the slow connections. This demonstrates that spiking networks can learn complex dynamics using purely local learning rules, using E/I balance as the key rather than as an additional constraint. The resulting network implements a given function within the predictive coding scheme, with minimal dimensions and activity.
The brain constantly predicts relevant sensory inputs or motor trajectories. For example, there is evidence that neural circuits mimic the dynamics of motor effectors using internal models [1]. If the dynamics of the predicted sensory and motor variables change in time, these models may become invalid [2] and therefore need to be readjusted through learning based on error feedback. From a modeling perspective, supervised learning in recurrent networks faces many challenges. Earlier models have succeeded in learning useful functions at the cost of non-local learning rules that are biologically implausible [3, 4]. More recent models based on reservoir computing [5–7] transfer the learning from the recurrent network (now with "random", fixed weights) to the readout weights. Using this simple scheme, the network can learn to generate complex patterns. However, the majority of these models use abstract rate units and are yet to be translated into more realistic spiking networks. Moreover, to provide a sufficiently large reservoir, the recurrent network needs to be large, balanced, and have rich, high-dimensional dynamics. This typically generates far more activity than strictly required, a redundancy that can be seen as inefficient. On the other hand, supervised learning models involving spiking neurons have essentially concentrated on the learning of precise spike sequences [8–10]. With some exceptions [10, 11], these models use feed-forward architectures [12]. In a balanced recurrent network with asynchronous, irregular and highly variable spike trains, such as those found in cortex, the activity has been shown to be chaotic [13, 14]. This leads to spike timing being intrinsically unreliable, rendering a representation of the trajectory by precise spike sequences problematic. Moreover, many configurations of spike times may achieve the same goal [15].
Here we derive two local learning rules that drive a network of leaky integrate-and-fire (LIF) neurons into implementing a desired linear dynamical system. The network is trained to minimize the objective ‖x(t) − x̂(t)‖² + H(r), where x̂(t) is the output of the network decoded from the spikes, x(t) is the desired output, and H(r) is a cost associated with firing (penalizing unnecessary activity, and thus enforcing efficiency). The dynamical system is linear, ẋ = Ax + c, with A a constant matrix and c a time-varying command signal. We first study the learning of an autoencoder, i.e., a network where the desired output is fed to the network as a feedforward input. The autoencoder learns to represent its inputs as precisely as possible in an unsupervised fashion. After learning, each unit represents the encoding error made by the entire network. We then show that the network can learn more complex computations if slower recurrent connections are added to the autoencoder. Thus, it receives the command c along with an error signal and learns to generate the output x̂ with the desired temporal dynamics. Despite the spike-based nature of the representation and of the plasticity rules, the learning does not enforce precise spike-timing trajectories but, on the contrary, enforces irregular and highly variable spike trains.

1 Learning a balance: global becomes local

Using a predictive coding strategy [15–17], we build a network that learns to efficiently represent its inputs while spending the fewest possible spikes. To introduce the learning rules and explain how they work, we start by describing the optimized network (after learning). Let us first consider a set of unconnected integrate-and-fire neurons receiving shared input signals x = (xi) through feedforward connections F = (Fji). We assume that the network performs predictive coding, i.e., it subtracts from each of these input signals an estimate x̂ obtained by decoding the output spike trains (Fig 1A).
Specifically, x̂i = ∑j Dij rj, where D = (Dij) are the decoding weights and r = (rj) are the filtered spike trains, which obey ṙj = −λrj + oj, with oj(t) = ∑k δ(t − t_j^k) the spike train of neuron j and t_j^k the times of its spikes. Note that such an autoencoder automatically maintains an accurate representation, because it responds to any encoding error larger than the firing threshold by increasing its response, in turn decreasing the error. It is also efficient, because neurons respond only when the input and decoded signals differ. The autoencoder can be equivalently implemented by lateral connections, rather than by feedback targeting the inputs (Fig 1A). These lateral connections combine the feedforward connections and the decoding weights, and they subtract from the feedforward inputs received by each neuron. The membrane potential dynamics in this recurrent network are described by:

V̇ = −λV + Fs + Wo (1)

where V is the vector of membrane potentials of the population, s = ẋ + λx is the effective input to the population, W = −FD is the connectivity matrix, and o is the population spike-train vector. Neuron i has threshold Ti = ‖Fi‖²/2 [15]. When the input channels are independent and the feed-forward weights are distributed uniformly on a sphere, the optimal decoding weights D are equal to the encoding weights F, and hence the optimal recurrent connectivity is W = −FFᵀ [17]. In the following, we assume that this is always the case and choose the feedforward weights accordingly. In this auto-encoding scheme, having a precise representation of the inputs is equivalent to maintaining a precise balance between excitation and inhibition. In fact, the membrane potential of a neuron is the projection of the global error of the network on the neuron's feedforward weight (Vi = Fi(x − x̂) [15]). If the output of the network matches the input, the recurrent term in the membrane potential, Fi x̂, should precisely cancel the feedforward term Fi x.
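The optimized network of equation (1) can be simulated directly. The sketch below is an illustrative Python implementation (not the authors' code; the weight scale, time step, leak, and sinusoidal input are assumptions), using Euler integration and a greedy scheme with at most one spike per time step:

```python
import numpy as np

rng = np.random.default_rng(0)
N, K = 20, 2                   # neurons, signal dimensions
dt, lam = 1e-3, 10.0           # Euler step and leak (assumed values)
gamma = 0.2                    # feedforward weight scale (assumed)

F = rng.normal(size=(N, K))
F = gamma * F / np.linalg.norm(F, axis=1, keepdims=True)   # rows on a sphere
W = -F @ F.T                       # optimal fast connectivity W = -F F^T
T = 0.5 * np.sum(F**2, axis=1)     # thresholds T_i = ||F_i||^2 / 2

steps = 5000
t = np.arange(steps) * dt
x = np.stack([np.sin(2 * np.pi * t), np.cos(2 * np.pi * t)], axis=1)
s = np.gradient(x, dt, axis=0) + lam * x                   # s = x' + lam x

V = np.zeros(N)            # membrane potentials
r = np.zeros(N)            # filtered spike trains
err = []
for k in range(steps):
    V += dt * (-lam * V + F @ s[k])
    j = int(np.argmax(V - T))          # greedy: at most one spike per step
    if V[j] > T[j]:
        V += W[:, j]                   # instantaneous recurrence (incl. self-reset)
        r[j] += 1.0
    r -= dt * lam * r
    err.append(np.linalg.norm(x[k] - F.T @ r))

# after a short transient, the decoded output tracks x to single-spike resolution
assert np.mean(err[1000:]) < 0.3
```

A spike of neuron j increments rj and instantaneously updates all membrane potentials through W, including the self-reset by −‖Fj‖², which keeps the projected error below threshold in every encoded direction.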
Therefore, in order to learn the connectivity matrix W, we tackle the problem through balance, which is its physiological characterization. The learning rule that we derive achieves efficient coding by enforcing a precise balance at the single-neuron level. The learning rule makes the network converge to a state where each presynaptic spike cancels the recent charge accumulated by the postsynaptic neuron (Fig 1B). This accumulation of charge is naturally represented by the postsynaptic membrane potential Vi, which jumps upon the arrival of a presynaptic spike by a magnitude given by the recurrent weight Wij, owing to the instantaneous nature of the recurrent synapses. Because the two charges should cancel each other, the greedy learning rule is proportional to the sum of both quantities:

δWij ∝ −(Vi + βWij) (2)

where Vi is the membrane potential of the postsynaptic neuron, Wij is the recurrent weight from neuron j to neuron i, and the factor β controls the overall magnitude of the lateral weights and, therefore, the total spike count in the population. More importantly, β sets the cost penalizing the total spike count in the population (i.e., H(r) = µ ∑i ri, where µ is the effective linear cost [15]). The example of an inhibitory synapse Wij < 0 is illustrated in Fig 1B. If neuron i is too hyperpolarized upon the arrival of a presynaptic spike from neuron j, i.e., if the inhibitory weight Wij is smaller than −Vi/β, the absolute weight of the synapse (the amplitude of the IPSP) is decreased. The opposite occurs if the membrane is too depolarized.

Figure 1: A: a network performing predictive coding. Top panel: a set of unconnected leaky integrate-and-fire neurons receiving the error between a signal and their own decoded spike trains. Bottom panel: the previous architecture is equivalent to the recurrent network with lateral connections equal to the product of the encoding and the decoding weights. B: illustration of the learning of an inhibitory weight. The trace of the membrane potential of a postsynaptic neuron is shown in blue and red. The blue lines correspond to changes due to the integration of the feedforward input, and the red to changes caused by the integration of spikes from neurons in the population. The black line represents the resting potential of the neuron. In the right panel, the presynaptic spike perfectly cancels the feedforward current accumulated during a cycle, and therefore there is no learning. In the left panel, the inhibitory weight is too strong and thus creates an imbalance in the membrane potential; it is therefore depressed by learning. C: learning in a 20-neuron network. Top panels: the two dimensions of the input (blue lines) and the output (red lines) before (left) and after (right) learning. Bottom panels: raster plots of the spikes in the population. D: left panel: after learning, each neuron receives a local estimate of the output of the network through lateral connections (red arrows). Right panel: scatter plot of the output of the network projected on the feedforward weights of the neurons versus the recurrent input they receive. E: the evolution of the mean error between the recurrent weights of the network and the optimal recurrent weights −FFᵀ, using the rule defined by equation 2 (black line) and the rule in [16] (gray line). Note that our rule differs from [16] in that it operates on a finer time-scale and reaches the optimal balanced state more than one order of magnitude faster. This speed-up is important because, as we will see below, some computations require a very fast restoration of this balance.
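In reduced form, the stochastic fixed point of rule (2) can be seen directly: repeated updates at presynaptic spike times drive the weight to Wij = −⟨Vi⟩/β. A toy sketch (the mean membrane potential at presynaptic spikes and the parameter values below are assumed for illustration, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(3)
beta, eta = 0.5, 0.05     # weight-magnitude factor and learning rate (assumed)
v_bar = 0.12              # mean postsynaptic potential at presynaptic spikes (assumed)

W = 0.0
for _ in range(2000):
    V_post = v_bar + 0.02 * rng.normal()     # noisy membrane sample at a spike
    W += -eta * (V_post + beta * W)          # rule (2)

assert abs(W - (-v_bar / beta)) < 0.05       # converges to -<V_i>/beta
```

Each update is a contraction toward −V_post/β, so the weight settles at the average of that quantity over presynaptic spike times, which is the balance condition stated below.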
The synaptic weights thus converge when the two quantities balance each other on average, Wij = −⟨Vi⟩tj /β, where tj are the spike times of the presynaptic neuron j. Fig 1C shows the learning in a 20-neuron network receiving random input signals. For illustration purposes, the weights are initialized with very small values. Before learning, the lack of lateral connectivity causes neurons to fire synchronously and regularly. After learning, spike trains are sparse, irregular and asynchronous, despite the quasi-absence of noise in the network. Even though the firing rates decrease globally, the quality of the input representation drastically improves over the course of learning. Moreover, the convergence of the recurrent weights to their optimal values is typically quick and monotonic (Fig 1E). By enforcing balance, the learning rule establishes efficient and reliable communication between neurons. Because V = Fx − FFᵀr = F(x − x̂), every neuron has access, through its recurrent input, to the network's global coding error projected on its feedforward weight (Fig 1D). This local representation of the network's global performance is crucial in the supervised learning scheme we describe in the following sections.

2 Generating temporal dynamics within the network

While in the previous section we presented a novel rule that drives a spiking network into efficiently representing its inputs, we are generally interested in networks that perform more complex computations. It has already been shown that a network having two synaptic time scales can implement an arbitrary linear dynamical system [15]. We briefly summarize this approach in this section.

Figure 2: The construction of a recurrent network that implements a linear dynamical system.

In the autoencoder presented above, the effective input to the network is s = ẋ + λx (Fig 2A).
We assume that x follows the linear dynamics ẋ = Ax + c, where A is a constant matrix and c(t) is a time-varying command. Thus, the input can be expanded to s = Ax + c + λx = (A + λI)x + c (Fig 2B). Because the output of the network x̂ approximates x very precisely, they can be interchanged. According to this self-consistency argument, the external input term (A + λI)x is replaced by (A + λI)x̂, which only depends on the activity of the network (Fig 2C). This replacement amounts to including a global loop that adds the term (A + λI)x̂ to the source input (Fig 2D). As in the autoencoder, this can be achieved using recurrent connections of the form F(A + λI)Fᵀ (Fig 2E). Note that this recurrent input is driven by the filtered spike trains r, not the raw spikes o. As a result, these new connections have slower dynamics than the connections presented in the first section. This motivates us to characterize connections as fast or slow depending on their underlying dynamics. The dynamics of the membrane potentials are now described by:

V̇ = −λV V + Fc + Ws r + Wf o (3)

where λV is the leak in the membrane potential, which is different from the leak λ in the decoder. It is clear from the previous construction that the slow connectivity, Ws = F(A + λI)Fᵀ, is involved in generating the temporal dynamics of x. Owing to the slow connections, the network is able to generate the temporal dynamics of the output autonomously and thus only needs the command c as an external input. For example, if A = 0 (i.e., the network implements a pure integrator), Ws = λFFᵀ compensates for the leak in the decoder by generating a positive feedback term that prevents the activity from decaying. On the other hand, the fast connectivity matrix Wf = −FFᵀ, trained with the unsupervised, voltage-based rule presented previously, plays the same role as in the autoencoder; it ensures that the global output and the global coding error of the network are available locally to each neuron.
3 Teaching the network to implement a desired dynamical system

Our aim is to develop a supervised learning scheme in which a network learns to generate a desired output using an error feedback as well as a local learning rule. The learning rule targets the slow recurrent connections responsible for the generation of the temporal dynamics of the output, as seen in the previous section. Instead of deriving the learning rule for the recurrent connections directly, we first derive a learning rule for the matrix A of the linear dynamical system using simple results from control theory, and then translate the learning to the recurrent network.

3.1 Learning a linear dynamical system online

Consider the linear dynamical system ẋ̂ = Mx̂ + c, where M is a matrix. We derive an online learning rule for the coefficients of the matrix M, such that after learning the output x̂ becomes equal to the desired output x. The latter undergoes the dynamics ẋ = Ax + c. Therefore, we define e = x − x̂ as the error vector between the actual and the desired output. This error is fed to the mistuned system in order to correct and "guide" its behavior (Fig 3A). Thus, the dynamics of the system with this feedback are ẋ̂ = Mx̂ + c + K(x − x̂), where K is a scalar implementing the gain of the loop. The previous equation can be rewritten in the following form:

ẋ̂ = (M − KI)x̂ + c + Kx (4)

where I is the identity matrix. If we assume that the spectra of the signals are bounded, it is straightforward to show, via a Laplace transform, that x̂ → x when K → +∞. The larger the gain of the feedback, the smaller the error. Intuitively, if K is large, very small errors are immediately detected and therefore corrected by the system. Nevertheless, our aim is not to correct the dynamical system forever, but to teach it to generate the desired output itself, without the error feedback. Thus, the matrix M needs to be modified over time.
To derive the learning rule for the matrix M, we operate a gradient descent on the loss function L = eᵀe = ‖x − x̂‖² with respect to the components of the matrix. The component Mij is updated proportionally to the gradient of L,

δMij ∝ −∂L/∂Mij ∝ (∂x̂/∂Mij)ᵀ e (5)

To evaluate the term ∂x̂/∂Mij, we solve equation 4 in the simple case where the input c is constant. If we assume that K is much larger than the eigenvalues of M, the gradient ∂x̂/∂Mij is approximated by Eij x̂, where Eij is a matrix of zeros except for component ij, which is one. This leads to the very simple learning rule δMij ≈ x̂j ei, which we can write in matrix form as:

δM ∝ e x̂ᵀ (6)

The learning rule is simply the outer product of the error and the output. To derive the learning rule we assumed constant or slowly varying input. In practice, however, learning can also be achieved with fast-varying inputs (Fig 3).

3.2 Learning rule for the slow connections

In the previous section we derived a simple learning rule for the state matrix M of a linear dynamical system, driving it into a desired regime. We now translate this learning scheme to the recurrent network described in section 2. To do this, two things have to be determined. First, we have to define the form of the error feedback in the recurrent network case. Second, we need to adapt the learning rule for the matrix of the underlying dynamical system to the slow weights of the recurrent neural network. In the previous learning scheme, the error is fed to the dynamical system as an additional input. Since the input/decoding weight vector Fi of a neuron defines the direction that is relevant for its "action" space, the neuron should only receive the errors that lie along this direction. Thus, the error vector is projected on the feedforward weight vector of a neuron before being fed to it. The feedback weight matrix is then simply equal to the feedforward weight matrix F (Fig 3A).
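The feedback scheme (4) together with update (6) can be tested in a small simulation. The sketch below is illustrative only: the 2-D damped-oscillator target, the sinusoidal command, the gains, and the Euler discretization are all assumptions, not the paper's simulation. It recovers the target matrix A from the error feedback:

```python
import numpy as np

# Target dynamics x' = A x + c: a 2-D damped oscillator (assumed example)
A = np.array([[-0.3, -2.0],
              [ 2.0, -0.3]])
M = np.zeros((2, 2))               # mistuned system to be trained
K, eta, dt = 100.0, 200.0, 1e-3    # feedback gain, learning rate, Euler step

x = np.zeros(2)
xh = np.zeros(2)
for k in range(60000):
    t = k * dt
    c = np.array([np.sin(1.3 * t), np.cos(0.7 * t)])   # persistently exciting command
    e = x - xh                                         # error feedback
    x = x + dt * (A @ x + c)
    xh = xh + dt * (M @ xh + c + K * e)                # Eq. (4)
    M += eta * dt * np.outer(e, xh)                    # Eq. (6): dM ∝ e x̂ᵀ
assert np.linalg.norm(M - A) < 0.5                     # M has converged near A
```

With K large, e ≈ (A − M)x̂/K, so the update performs a gradient descent toward M = A, as long as the command keeps x̂ exploring the whole state space.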
Accordingly, equation 3 becomes:

V̇ = −λV V + Fc + Ws r + Wf o + KFe (7)

In the autoencoder, the membrane potential of a neuron represents the auto-encoding error made by the entire network along the direction of the neuron's feedforward weights. With the addition of the dynamic error feedback and the slow connections, the membrane potentials now represent the error between the obtained and desired network output trajectories. To translate the learning rule of the dynamical system into a rule for the recurrent network, we assume that any modification of the recurrent weights directly reflects a modification of the underlying dynamical system. This is achieved if the updates δWs of the slow connectivity matrix are of the form F(δM)Fᵀ. This ensures that the network always implements a linear dynamical system and guarantees that the analysis is consistent. The learning rule for the slow connections Ws is obtained by replacing δM by its expression according to equation 6 in F(δM)Fᵀ:

δWs ∝ (Fe)(Fx̂)ᵀ (8)

According to this learning rule, the weight update between two neurons, δWsij, is proportional to the error feedback Fi e received as a current by the postsynaptic neuron i and to Fj x̂, the output of the network projected on the feedforward weight of the presynaptic neuron j. The latter quantity is available to the presynaptic neuron through its incoming fast recurrent connections, as shown for the autoencoder in Fig 1D. One might object that the previous learning rule is not biologically plausible because it involves currents present separately in the pre- and postsynaptic neurons. Indeed, the presynaptic term may not be available to the synapse. However, as shown in the supplementary information of [15], the filtered spike train rj of the presynaptic neuron is approximately proportional to ⌊Fj x̂⌋₊, a rectified version of the presynaptic term in the previous learning rule.
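The consistency claim, that an update of the form (8) corresponds to an update F(δM)Fᵀ of the underlying system, is an exact matrix identity, checked below (illustrative sketch with random stand-in vectors):

```python
import numpy as np

rng = np.random.default_rng(4)
N, d = 20, 2
F = rng.normal(size=(N, d))   # feedforward weights
e = rng.normal(size=d)        # network output error
xh = rng.normal(size=d)       # network output

dM = np.outer(e, xh)                    # update of the underlying system, Eq. (6)
dWs = np.outer(F @ e, F @ xh)           # slow-weight update, Eq. (8)
assert np.allclose(dWs, F @ dM @ F.T)   # (F e)(F x̂)ᵀ = F (e x̂ᵀ) Fᵀ
```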
By replacing Fjˆx by rj in the equation 8 we obtain the following biologically plausible learning rule: δW s ij = Eirj (9) Where Ei = Fie is the total error current received by the postsynaptic neuron. 3.3 Learning the underlying dynamical system while maintaining balance For the previous analysis to hold, the fast connectivity Wf should be learned simultaneously with the slow connections using the learning rule defined by equation 2. As shown in the first section, the learning of the fast connections establishes a detailed balance on the level of the neuron and guarantees that the output of the network is available to each neuron through the term Fjˆx. The latter is the presynaptic term in the learning rule of equation 8. Despite not being involved in the dynamics per se, these fast connections are crucial in order to learn any temporal dynamics. In other words, learning a detailed balance is a pre-requirement to learn dynamics with local plasticity rules in a spiking network. The plasticity of the fast connections restores very quickly any perturbation to the balance caused by the learning of the slow connections. 3.4 Simulation As a toy example, we simulated a 20-neuron network learning a 2D-damped oscillator using a feedback gain K = 100. The network is initialized with weak fast connections and weak slow connections. The learning is driven by smoothed gaussian noise as the command c. Note that in the initial state, because of the absence of fast recurrent connections, the output of the network does not depend linearly on the input because membrane potentials are hyperpolarized (Fig 3B). The network’s output is quickly linearized through the learning of the fast connections (equation 2 by enforcing a 6 M ˆx x c Ke + + - + c ˆx x Ke + - F F FT A C D M.P. 
Figure 3: Learning temporal dynamics in a recurrent network. A, Top panel: the linear dynamical system characterized by the state matrix M receives feedback signaling the difference between its actual output and a desired output. Bottom panel: a recurrent network with slow and fast connections is equivalent to the top architecture if the error feedback is fed into the network through the feedforward matrix F. B: a 20-neuron network learns using equations 9 and 2. Left panel: the evolution of the error between the desired and the actual output during learning. The black and grey arrows mark instances at which the time course of the membrane potential is shown in the next plot. Right panel: the time course of the membrane potential of one neuron at two different instances during learning. The gray line corresponds to the initial state, while the black line is a few iterations later. C: scatter plots of the learned versus the predicted weights at the end of learning, for fast (top panel) and slow (bottom panel) connections. D, top panels: the output of the network (red) and the desired output (blue), before (left) and after (right) learning. The black solid line on top shows the impulse command that drives the network. Bottom panels: raster plots before and after learning. In the left raster plot there is no spiking activity after the first 50 ms.

Initial membrane potentials exhibit large fluctuations, which reduce drastically after a few iterations (Fig 3B). On a slower time scale, the slow connections learn to minimize the prediction error using the learning rule of equation 9. The error between the output of the network and the desired output decreases drastically (Fig 3B).
To compute this error, different instances of the connectivity matrices were sampled during learning. The network was then re-simulated using these instances while fixing K = 0, in order to measure the performance in the absence of feedback. At the end of learning, the slow and fast connections converge to their predicted values $W^s = F(A + \lambda I)F^T$ and $W^f = -FF^T$ (Fig 3C). The presence of the feedback is no longer required for the network to have the right dynamics (i.e., we set K = 0 and obtain the desired output; Fig 3D and 3B). The output of the network is very accurate (representing the state x with a precision of the order of the contribution of a single spike), parsimonious (i.e., it does not spend more spikes than needed to represent the dynamical state with this level of accuracy), and the spike trains are asynchronous and irregular. Note that because the slow connections are very weak in the initial state, spiking activity decays quickly after the end of the command impulse, due to the absence of slow recurrent excitation (Fig 3D).

Simulation parameters

Figure 1: λ = 0.05, β = 0.51, learning rate: 0.01. Figure 3: λ = 50, λ_V = 1, β = 0.52, K = 100, learning rate of the fast connections: 0.03, learning rate of the slow connections: 0.15.

4 Discussion

Using a top-down approach, we derived a pair of spike-based and current-based plasticity rules that enable precise supervised learning in a recurrent network of LIF neurons. The essence of this approach is that every neuron is a precise computational unit that represents the network error in a one-dimensional subspace of the output space. The precise and distributed nature of this code allows the derivation of local learning rules from global objectives. To compute collectively, the neurons need to communicate to each other their contributions to the output of the network. The fast connections are trained in an unsupervised fashion using a spike-based rule to optimize this communication.
They establish this efficient communication by enforcing a detailed balance between excitation and inhibition. The slow connections, however, are trained to minimize the error between the actual output of the network and a target dynamical system. They produce currents with long temporal correlations, implementing the temporal dynamics of the underlying linear dynamical system. The plasticity rule for the slow connections is simply proportional to an error feedback injected as a current into the postsynaptic neuron, and to a quantity akin to the firing rate of the presynaptic neuron. To guide the behavior of the network during learning, the error feedback must be strong and specific. Such strength and specialization are in agreement with data on climbing fibers in the cerebellum [18-20], which are believed to convey information about errors during motor learning [21]. However, in this model, the specificity of the error signals is defined by a weight matrix through which the errors are fed to the neurons. Learning these weights is still under investigation; we believe that they could be learned using a covariance-based rule. Our approach is substantially different from usual supervised learning paradigms in spiking networks, since it does not target the spike times explicitly. However, observing spike times may be misleading, since there are many combinations that can produce the same output [15, 16]. Thus, in this framework, variability in spiking is not a lack of precision, but a consequence of the redundancy in the representation. Neurons with similar decoding weights may have their spike times interchanged while the global representation is conserved. What matters is the cooperation between the neurons and the precise spike timing relative to the population. For example, using independent Poisson neurons with instantaneous firing rates identical to those of the predictive coding network drastically degrades the quality of the representation [15].
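As a minimal sketch of the solution the network converges to (the predicted weights $W^f = -FF^T$ and $W^s = F(A + \lambda I)F^T$ from the simulation section), the connectivity can be constructed directly; the random decoder F and the 2D damped-oscillator state matrix A below are illustrative choices, not the ones used in Figure 3:

```python
import numpy as np

# Sketch of the predicted converged connectivity, W^f = -F F^T and
# W^s = F (A + lambda I) F^T, for an arbitrary random decoder F and a toy
# 2D damped-oscillator state matrix A (all values here are illustrative).
rng = np.random.default_rng(1)
N, J = 20, 2
lam = 50.0                               # leak, as in the Figure 3 parameters
F = 0.1 * rng.standard_normal((N, J))    # feedforward weights
A = np.array([[0.0, 1.0],
              [-1.0, -0.2]])             # damped harmonic oscillator

Wf = -F @ F.T                            # fast connections: enforce balance
Ws = F @ (A + lam * np.eye(J)) @ F.T     # slow connections: implement the dynamics
```

Note that both matrices are low rank (rank at most J), since they are built from the J-dimensional decoder, and the fast connectivity is symmetric and negative semi-definite.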
Our approach also differs from liquid computing in that the network is small, structured, and fires only when needed. In addition, in those studies the feedback error used in the learning rule has no clear physiological correlate, whereas here it is concretely injected as a current into the neurons. This current is used simultaneously to drive the learning rule and to guide the dynamics of the neuron in the short term. However, it is still unclear what mechanisms could implement such a current-dependent learning rule in biological neurons. An obvious limitation of our framework is that it is currently restricted to linear dynamical systems. One possibility for overcoming this limitation would be to introduce non-linearities in the decoder, which would translate into specific non-linearities and structures in the dendrites. A similar strategy has been employed recently to combine the approach of predictive coding and FORCE learning [7] using two-compartment LIF neurons [22]. We are currently exploring less constraining forms of synaptic non-linearities, with the ultimate goal of being able to learn arbitrary dynamics in spiking networks using purely local plasticity rules.

Acknowledgments

This work was supported by ANR-10-LABX-0087 IEC, ANR-10-IDEX-0001-02 PSL, ERC grant FP7-PREDISPIKE and the James McDonnell Foundation Award - Human Cognition.

References

[1] Kawato, M. (1999). Internal models for motor control and trajectory planning. Current Opinion in Neurobiology, 9(6), 718-727.
[2] Lackner, J. R., & Dizio, P. (1998). Gravitoinertial force background level affects adaptation to Coriolis force perturbations of reaching movements. Journal of Neurophysiology, 80(2), 546-553.
[3] Rumelhart, D. E., Hinton, G. E., & Williams, R. J. (1988). Learning representations by back-propagating errors. Cognitive Modeling, 5.
[4] Williams, R. J., & Zipser, D. (1989). A learning algorithm for continually running fully recurrent neural networks.
Neural Computation, 1(2), 270-280.
[5] Jaeger, H. (2001). The echo state approach to analysing and training recurrent neural networks - with an erratum note. Bonn, Germany: German National Research Center for Information Technology GMD Technical Report, 148, 34.
[6] Maass, W., Natschläger, T., & Markram, H. (2002). Real-time computing without stable states: A new framework for neural computation based on perturbations. Neural Computation, 14(11), 2531-2560.
[7] Sussillo, D., & Abbott, L. F. (2009). Generating coherent patterns of activity from chaotic neural networks. Neuron, 63(4), 544-557.
[8] Legenstein, R., Naeger, C., & Maass, W. (2005). What can a neuron learn with spike-timing-dependent plasticity? Neural Computation, 17(11), 2337-2382.
[9] Pfister, J., Toyoizumi, T., Barber, D., & Gerstner, W. (2006). Optimal spike-timing-dependent plasticity for precise action potential firing in supervised learning. Neural Computation, 18(6), 1318-1348.
[10] Ponulak, F., & Kasinski, A. (2010). Supervised learning in spiking neural networks with ReSuMe: sequence learning, classification, and spike shifting. Neural Computation, 22(2), 467-510.
[11] Memmesheimer, R. M., Rubin, R., Ölveczky, B. P., & Sompolinsky, H. (2014). Learning precisely timed spikes. Neuron, 82(4), 925-938.
[12] Gütig, R., & Sompolinsky, H. (2006). The tempotron: a neuron that learns spike timing-based decisions. Nature Neuroscience, 9(3), 420-428.
[13] van Vreeswijk, C., & Sompolinsky, H. (1996). Chaos in neuronal networks with balanced excitatory and inhibitory activity. Science, 274(5293), 1724-1726.
[14] Brunel, N. (2000). Dynamics of networks of randomly connected excitatory and inhibitory spiking neurons. Journal of Physiology-Paris, 94(5), 445-463.
[15] Boerlin, M., Machens, C. K., & Denève, S. (2013). Predictive coding of dynamical variables in balanced spiking networks. PLoS Computational Biology, 9(11), e1003258.
[16] Bourdoukan, R., Barrett, D., Machens, C. K., & Denève, S. (2012).
Learning optimal spike-based representations. In Advances in Neural Information Processing Systems (pp. 2285-2293).
[17] Vertechi, P., Brendel, W., & Machens, C. K. (2014). Unsupervised learning of an efficient short-term memory network. In Advances in Neural Information Processing Systems (pp. 3653-3661).
[18] Watanabe, M., & Kano, M. (2011). Climbing fiber synapse elimination in cerebellar Purkinje cells. European Journal of Neuroscience, 34(10), 1697-1710.
[19] Chen, C., Kano, M., Abeliovich, A., Chen, L., Bao, S., Kim, J. J., ... & Tonegawa, S. (1995). Impaired motor coordination correlates with persistent multiple climbing fiber innervation in PKC mutant mice. Cell, 83(7), 1233-1242.
[20] Eccles, J. C., Llinas, R., & Sasaki, K. (1966). The excitatory synaptic action of climbing fibres on the Purkinje cells of the cerebellum. The Journal of Physiology, 182(2), 268-296.
[21] Knudsen, E. I. (1994). Supervised learning in the brain. The Journal of Neuroscience, 14(7), 3985-3997.
[22] Thalmeier, D., Uhlmann, M., Kappen, H. J., & Memmesheimer, R. Learning universal computations with spikes. Under review.
Online Learning for Adversaries with Memory: Price of Past Mistakes

Oren Anava, Technion, Haifa, Israel. oanava@tx.technion.ac.il
Elad Hazan, Princeton University, New York, USA. ehazan@cs.princeton.edu
Shie Mannor, Technion, Haifa, Israel. shie@ee.technion.ac.il

Abstract

The framework of online learning with memory naturally captures learning problems with temporal effects, and was previously studied for the experts setting. In this work we extend the notion of learning with memory to the general Online Convex Optimization (OCO) framework, and present two algorithms that attain low regret. The first algorithm applies to Lipschitz continuous loss functions, obtaining optimal regret bounds for both convex and strongly convex losses. The second algorithm attains the optimal regret bounds and applies more broadly to convex losses without requiring Lipschitz continuity, yet is more complicated to implement. We complement the theoretical results with two applications: statistical arbitrage in finance, and multi-step ahead prediction in statistics.

1 Introduction

Online learning is a well-established learning paradigm with both theoretical and practical appeal. The goal in this paradigm is to make sequential decisions, where at each trial the cost associated with previous prediction tasks is given. In recent years, online learning has been widely applied in several research fields, including game theory, information theory, and optimization. We refer the reader to [1, 2, 3] for a more comprehensive survey. One of the most well-studied frameworks of online learning is Online Convex Optimization (OCO). In this framework, an online player iteratively chooses a decision in a convex set; then a convex loss function is revealed, and the player suffers the loss obtained by applying this function to the decision she chose. It is usually assumed that the loss functions are chosen arbitrarily, possibly by an all-powerful adversary.
The performance of the online player is measured using the regret criterion, which compares the accumulated loss of the player with the accumulated loss of the best fixed decision in hindsight. This notion of regret captures only memoryless adversaries, who determine the loss based on the player's current decision, and fails to cope with bounded-memory adversaries, who determine the loss based on the player's current and previous decisions. However, in many scenarios, such as coding, compression, portfolio selection and more, the adversary is not completely memoryless, and the previous decisions of the player affect her current loss. We are particularly concerned with scenarios in which the memory is relatively short-term and simple, in contrast to state-action models, for which reinforcement learning models are more suitable [4]. An important aspect of our work is that the memory is not used to relax the adaptiveness of the adversary (cf. [5, 6]), but rather to model the feedback received by the player. In particular, throughout this work we assume that the adversary is oblivious, that is, it must determine the whole set of loss functions in advance. In addition, we assume a counterfactual feedback model: the player is aware of the loss she would have suffered had she played any sequence of m decisions in the previous m rounds. This model is quite common in the online learning literature; see for instance [7, 8]. Our goal in this work is to extend the notion of learning with memory to one of the most general online learning frameworks: OCO. To this end, we adopt the policy regret¹ criterion of [5], and propose two different approaches for the extended framework, both attaining the optimal bounds with respect to this criterion.

1.1 Summary of Results

We present and analyze two algorithms for the framework of OCO with memory, both attaining policy regret bounds that are optimal in the number of rounds.
Our first algorithm utilizes the Lipschitz property of the loss functions and, to the best of our knowledge, is the first algorithm for this framework that is not based on any blocking technique (this technique is detailed in the related work section below). This algorithm attains O(T^{1/2})-policy regret for general convex loss functions and O(log T)-policy regret for strongly convex losses. For the case of convex and non-Lipschitz loss functions, our second algorithm attains the nearly optimal Õ(T^{1/2})-policy regret²; its downside is that it is randomized and more difficult to implement. A novel result that follows immediately from our analysis is that our second algorithm attains expected Õ(T^{1/2})-regret, along with Õ(T^{1/2}) decision switches, in the standard OCO framework. A similar result currently exists only for the special case of the experts problem [9]. We note that the two algorithms we present are related in spirit (both are designed to cope with bounded-memory adversaries), but differ in techniques and analysis.

Framework | Previous bound | Our first approach | Our second approach
Experts with memory | O(T^{1/2}) | Not applicable | Õ(T^{1/2})
OCO with memory (convex losses) | O(T^{2/3}) | O(T^{1/2}) | Õ(T^{1/2})
OCO with memory (strongly convex losses) | Õ(T^{1/3}) | O(log T) | Õ(T^{1/2})

Table 1: State-of-the-art upper bounds on the policy regret as a function of T (number of rounds) for the framework of OCO with memory. The best known previous bounds are due to the works of [9], [8], and [5], which are detailed in the related work section below.

1.2 Related Work

The framework of OCO with memory was initially considered in [7] as an extension of the experts framework of [10]. Merhav et al. offered a blocking technique that guarantees a policy regret bound of O(T^{2/3}) against bounded-memory adversaries. Roughly speaking, the proposed technique divides the T rounds into T^{2/3} equal-sized blocks, while employing a constant decision throughout each of these blocks.
The small number of decision switches enables learning in the extended framework, yet the constant block size results in a suboptimal policy regret bound. Later, [8] showed that a policy regret bound of O(T^{1/2}) can be achieved by simply adapting the Shrinking Dartboard (SD) algorithm of [9] to the framework considered in [7]. In short, the SD algorithm is aimed at ensuring an expected O(T^{1/2}) decision switches in addition to O(T^{1/2})-regret. These two properties together enable learning in the considered framework, and the randomized block size yields an optimal policy regret bound. Note that in both [7] and [8], the presented techniques are applicable only to the variant of the experts framework for adversaries with memory, and not to the general OCO framework.

¹The policy regret compares the performance of the online player with the best fixed sequence of actions in hindsight, and thus captures the notion of adversaries with memory. A formal definition appears in Section 2.
²The notation Õ(·) is a variant of the O(·) notation that ignores logarithmic factors.

The framework of online learning against adversaries with memory was also studied in the setting of the adversarial multi-armed bandit problem. In this context, [5] showed how to convert an online learning algorithm with a regret guarantee of O(T^q) into an online learning algorithm that attains O(T^{1/(2-q)})-policy regret, also using a blocking technique. This approach is in fact a generalization of [7] to the bandit setting, yet the ideas presented are somewhat simpler. Despite the original presentation of [5] being in the bandit setting, their ideas can easily be generalized to the framework of OCO with memory, yielding a policy regret bound of O(T^{2/3}) for convex losses and Õ(T^{1/3})-policy regret for strongly convex losses. An important concept captured by the framework of OCO with memory is switching costs, which can be seen as a special case where the memory is of length 1.
This special case was studied in the works of [11], who studied the relationship between second-order regret bounds and switching costs, and [12], who proved that the blocking algorithm of [5] is optimal for the setting of the adversarial multi-armed bandit with switching costs.

2 Preliminaries and Model

We now formally define the notation for both the standard OCO framework and the framework of OCO with memory. For the sake of readability, we use the notation $g_t$ for memoryless loss functions (corresponding to memoryless adversaries), and $f_t$ for loss functions with memory (corresponding to bounded-memory adversaries).

2.1 The Standard OCO Framework

In the standard OCO framework, an online player iteratively chooses a decision $x_t \in K$ and suffers loss equal to $g_t(x_t)$. The decision set $K$ is assumed to be a bounded convex subset of $\mathbb{R}^n$, and the loss functions $\{g_t\}_{t=1}^{T}$ are assumed to be convex functions from $K$ to $[0, 1]$. In addition, the set $\{g_t\}_{t=1}^{T}$ is assumed to be chosen in advance, possibly by an all-powerful adversary that has full knowledge of our learning algorithm (see [1], for instance). The performance of the player is measured using the regret criterion, defined as follows:

$R_T = \sum_{t=1}^{T} g_t(x_t) - \min_{x \in K} \sum_{t=1}^{T} g_t(x)$,

where $T$ is a predefined integer denoting the total number of rounds played. The goal in this framework is to design efficient algorithms whose regret grows sublinearly in $T$, corresponding to an average per-round regret going to zero as $T$ increases.

2.2 The Framework of OCO with Memory

In this work we consider the framework of OCO with memory, detailed as follows: at each round $t$, the online player chooses a decision $x_t \in K \subset \mathbb{R}^n$. Then, a loss function $f_t : K^{m+1} \to \mathbb{R}$ is revealed, and the player suffers loss $f_t(x_{t-m}, \ldots, x_t)$. For simplicity, we assume that $0 \in K$, and that $f_t(x_0, \ldots, x_m) \in [0, 1]$ for any $x_0, \ldots, x_m \in K$.
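As a minimal sketch of the standard regret criterion above, the following runs projected online gradient descent on toy quadratic losses over the unit ball; the losses, step sizes, and horizon are illustrative assumptions, not part of the paper:

```python
import numpy as np

# Minimal sketch of the standard OCO regret criterion: projected online
# gradient descent on quadratic losses g_t(x) = ||x - z_t||^2 over the unit
# ball (losses, horizon, and step sizes are illustrative, not from the paper).
rng = np.random.default_rng(0)
T, n = 500, 3
Z = rng.uniform(-0.3, 0.3, size=(T, n))       # loss centers z_t

def project(x):                                # projection onto the unit ball K
    nrm = np.linalg.norm(x)
    return x if nrm <= 1 else x / nrm

x = np.zeros(n)
player_loss = 0.0
for t in range(T):
    player_loss += np.sum((x - Z[t]) ** 2)     # suffer g_t(x_t)
    grad = 2 * (x - Z[t])
    x = project(x - 0.5 / np.sqrt(t + 1) * grad)   # step size ~ 1/sqrt(t)

# Best fixed decision in hindsight: the mean of the z_t, projected onto K.
best_fixed = project(Z.mean(axis=0))
comparator_loss = np.sum((Z - best_fixed) ** 2)
regret = player_loss - comparator_loss         # R_T, grows sublinearly in T
```

Here the comparator is computed in closed form because the sum of the quadratic losses is minimized at the (projected) mean of the centers.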
Notice that the loss at round $t$ depends on the previous $m$ decisions of the player, as well as on her current one. We assume that after $f_t$ is revealed, the player is aware of the loss she would have suffered had she played any sequence of decisions $x_{t-m}, \ldots, x_t$ (this corresponds to the counterfactual feedback model mentioned earlier). Our goal in this framework is to minimize the policy regret, as defined in [5]³:

$R_{T,m} = \sum_{t=m}^{T} f_t(x_{t-m}, \ldots, x_t) - \min_{x \in K} \sum_{t=m}^{T} f_t(x, \ldots, x)$.

We define the notion of convexity for the loss functions $\{f_t\}_{t=1}^{T}$ as follows: we say that $f_t$ is a convex loss function with memory if $\tilde{f}_t(x) = f_t(x, \ldots, x)$ is convex in $x$. From now on, we assume that $\{f_t\}_{t=1}^{T}$ are convex loss functions with memory.

³The rounds in which $t < m$ are ignored, since we assume that the loss per round is bounded by a constant; this adds at most a constant to the final regret bound.

Algorithm 1
1: Input: learning rate $\eta > 0$, $\sigma$-strongly convex and smooth regularization function $R(x)$.
2: Choose $x_0, \ldots, x_m \in K$ arbitrarily.
3: for $t = m$ to $T$ do
4:   Play $x_t$ and suffer loss $f_t(x_{t-m}, \ldots, x_t)$.
5:   Set $x_{t+1} = \arg\min_{x \in K} \left\{ \eta \cdot \sum_{\tau=m}^{t} \tilde{f}_\tau(x) + R(x) \right\}$
6: end for
Intuitively, Algorithm 1 relies on the fact that the corresponding functions { ˜ft}T t=1 are memoryless and convex. Thus, standard regret minimization techniques are applicable, yielding a regret bound of O(T 1/2) for { ˜ft}T t=1. This however, is not the policy regret bound we are interested in, but is in fact quite close if we use the Lipschitz property of {ft}T t=1 and set the learning rate properly. The algorithm requires the following standard definitions of R and λ (see supplementary material for more comprehensive background and exact norm definitions): λ = sup t∈{1,...,T },x,y∈K ∥∇˜ft(x)∥∗ y 2 and R = sup x,y∈K {R(x) −R(y)} . (1) Additionally, we denote by σ the strong convexity4 parameter of the regularization function R(x). For Algorithm 1 we can prove the following: Theorem 3.1. Let {ft}T t=1 be Lipschitz continuous loss functions with memory (from Km+1 to [0, 1]), and let R and λ be as defined in Equation (1). Then, Algorithm 1 generates an online sequence {xt}T t=1, for which the following holds: RT,m = T X t=m ft(xt−m, . . . , xt) −min x∈K T X t=m ft(x, . . . , x) ≤2Tλη(m + 1)3/2 + R η . Setting η = R1/2(TL)−1/2(m+1)−3/4(λ/σ)−1/4 yields RT,m ≤3(TRL)1/2(m+1)3/4(λ/σ)1/4. The following is an immediate corollary of Theorem 3.1 to H-strongly convex losses: Corollary 3.2. Let {ft}T t=1 be Lipschitz continuous and H-strongly convex loss functions with memory (from Km+1 to [0, 1]), and denote G = supt,x∈K ∥∇˜ft(x)∥. Then, Algorithm 1 generates an online sequence {xt}T t=1, for which the following holds: RT,m ≤2(m + 1)3/2G2 T X t=m ηt + T X t=m ∥xt −x∗∥2 1 ηt+1 −1 ηt −H . Setting ηt = 1 Ht yields RT,m ≤2(m+1)3/2G2 H (1 + log(T)). The proof simply requires plugging time-dependent learning rate in the proof of Theorem 3.1, and is thus omitted here. 4f(x) is σ-strongly convex if ∇2f(x) ⪰σIn×n for all x ∈K. We say that ft : Km+1 →R is σ-strongly convex loss function with memory if ˜ft(x) = ft(x, . . . , x) is σ-strongly convex in x. 
Algorithm 2
1: Input: learning parameter $\eta > 0$.
2: Initialize $w_1(x) = 1$ for all $x \in K$, and choose $x_1 \in K$ arbitrarily.
3: for $t = 1$ to $T$ do
4:   Play $x_t$ and suffer loss $g_t(x_t)$.
5:   Define weights $w_{t+1}(x) = e^{-\alpha \sum_{\tau=1}^{t} \hat{g}_\tau(x)}$, where $\alpha = \frac{\eta}{4G^2}$ and $\hat{g}_t(x) = g_t(x) + \frac{\eta}{2}\|x\|^2$.
6:   Set $x_{t+1} = x_t$ with probability $\frac{w_{t+1}(x_t)}{w_t(x_t)}$.
7:   Otherwise, sample $x_{t+1}$ from the density function $p_{t+1}(x) = w_{t+1}(x) \cdot \left( \int_K w_{t+1}(x)\,dx \right)^{-1}$.
8: end for

4 Policy Regret with Low Switches

In this section we present a different approach to the framework of OCO with memory: low switches. This approach was considered before in [8], who adapted the Shrinking Dartboard (SD) algorithm of [9] to cope with limited-delay coding. However, the authors of [9, 8] consider only the experts setting, in which the decision set is the simplex and the loss functions are linear. Here we adapt this approach to general decision sets and general convex loss functions, and obtain optimal policy regret against bounded-memory adversaries. Due to space constraints, we present here only the algorithm and the main theorem; the complete analysis appears in the supplementary material. Intuitively, Algorithm 2 defines a probability distribution over $K$ at each round $t$. By sampling from this distribution one can generate an online sequence with an expected low-regret guarantee. This, however, is not sufficient to cope with bounded-memory adversaries, and thus an additional element of choosing $x_{t+1} = x_t$ with high probability is necessary (line 6). Our analysis shows that if this probability equals $\frac{w_{t+1}(x_t)}{w_t(x_t)}$, the regret guarantee is preserved, and we gain an additional low-switches guarantee. For Algorithm 2 we can prove the following:

Theorem 4.1. Let $\{g_t\}_{t=1}^{T}$ be convex functions from $K$ to $[0, 1]$, such that $D = \sup_{x,y \in K} \|x - y\|$ and $G = \sup_{x,t} \|\nabla g_t(x)\|$, and define $\hat{g}_t(x) = g_t(x) + \frac{\eta}{2}\|x\|^2$ for $\eta = \frac{2G}{D}\sqrt{\frac{1 + \log(T+1)}{T}}$.
Then, Algorithm 2 generates an online sequence $\{x_t\}_{t=1}^{T}$ for which it holds that

$\mathbb{E}[R_T] = O\!\left(\sqrt{T \log(T)}\right)$ and $\mathbb{E}[S] = O\!\left(\sqrt{T \log(T)}\right)$,

where $S$ is the number of decision switches in the sequence $\{x_t\}_{t=1}^{T}$. The exact bounds for $\mathbb{E}[R_T]$ and $\mathbb{E}[S]$ are given in the supplementary material. Notice that Algorithm 2 applies to memoryless loss functions, yet its low-switches guarantee implies learning against bounded-memory adversaries, as stated and proven in Lemma C.5 (see supplementary material).

5 Application to Statistical Arbitrage

Our first application is motivated by financial models aimed at creating statistical arbitrage opportunities. In the literature, "statistical arbitrage" refers to statistical mispricing of one or more assets based on their expected value. One of the most common trading strategies, known as "pairs trading", seeks to create a mean-reverting portfolio using two assets with the same sectorial belonging (typically using both long and short sales). Then, by buying this portfolio below its mean and selling it above, one can obtain an expected positive profit with low risk. Here we extend the traditional pairs trading strategy and present an approach that aims at constructing a mean-reverting portfolio from an arbitrary (yet known in advance) number of assets. Roughly speaking, our goal is to synthetically create a mean-reverting portfolio by maintaining weights over $n$ different assets. The main problem arising in this context is: how do we quantify the amount of mean reversion of a given portfolio? Indeed, mean reversion is a somewhat ill-defined concept, and thus
We note that due to the very nature of the problem (weights of one trading period affect future performance), the memory comes unavoidably into the picture. We proceed to formally define the new mean reversion proxy and the use of our new algorithm in this model. Thus, denote by yt ∈Rn the prices of n assets at time t, and by xt ∈Rn a distribution of weights over these assets. Since short selling is allowed, the norm of xt can sum up to an arbitrary number, determined by the loan flexibility. Without loss of generality we assume that ∥xt∥2 = 1, which is also assumed in the works of [14, 15]. Note that since xt determines the proportion of wealth to be invested in each asset and not the actual wealth it self, any other constant would work as well. Consequently, define: ft(xt−m, . . . , xt) = m X i=0 x⊤ t−iyt−i !2 −λ · m X i=0 x⊤ t−iyt−i 2 , (2) for some λ > 0. Notice that minimizing ft iteratively yields a process {x⊤ t yt}T t=1 such that its mean is close to zero (due to the expression on the left), and its variance is maximized (due to the expression on the right). We use the regret criterion to measure our performance against the best distribution of weights in hindsight, and wish to generate a series of weights {xt}T t=1 such that the regret is sublinear. Thus, define the memoryless loss function ˜ft(x) = ft(x, . . . , x) and denote At = m−1 X i=0 m−1 X j=0 yt−iy⊤ t−j and Bt = λ · m−1 X i=0 yt−iy⊤ t−i ! . Notice we can write ˜ft(x) = x⊤Atx −x⊤Btx. Since ˜ft is not convex in general, our techniques are not straightforwardly applicable here. However, the hidden convexity of the problem allows us to bypass this issue by a simple and tight Positive Semi-Definite (PSD) relaxation. Define ht(X) = X ◦At −X ◦Bt, (3) where X is a PSD matrix with Tr(X) = 1, and X ◦A is defined as Pn i=1 Pn j=1 X(i, j) · A(i, j). 
Now, notice that the problem of minimizing $\sum_{t=m}^{T} h_t(X)$ is a PSD relaxation of the minimization problem $\min_x \sum_{t=m}^{T} \tilde{f}_t(x)$, and for the optimal solution it holds that

$\min_X \sum_{t=m}^{T} h_t(X) \le \sum_{t=m}^{T} h_t(x^* x^{*\top}) = \sum_{t=m}^{T} \tilde{f}_t(x^*)$,

where $x^* = \arg\min_{x \in K} \sum_{t=m}^{T} \tilde{f}_t(x)$. Also, we can recover a vector $x$ from the PSD matrix $X$ using an eigenvector decomposition as follows: represent $X = \sum_{i=1}^{n} \lambda_i v_i v_i^\top$, where each $v_i$ is a unit vector and the $\lambda_i$ are non-negative coefficients such that $\sum_{i=1}^{n} \lambda_i = 1$. Then, by sampling the eigenvector $x = v_i$ with probability $\lambda_i$, we get that $\mathbb{E}[\tilde{f}_t(x)] = h_t(X)$. Technically, this decomposition is possible because $X$ is a PSD matrix with $\mathrm{Tr}(X) = 1$. Notice that $h_t$ is linear in $X$, and thus we can apply regret minimization techniques to the loss functions $\{h_t\}_{t=1}^{T}$. This procedure is formally given in Algorithm 3, for which we can prove the following:

Corollary 5.1. Let $\{f_t\}_{t=1}^{T}$ be as defined in Equation (2), and $\{h_t\}_{t=1}^{T}$ the corresponding memoryless functions, as defined in Equation (3). Then, applying Algorithm 2 to the loss functions $\{h_t\}_{t=1}^{T}$ yields an online sequence $\{X_t\}_{t=1}^{T}$ for which the following holds:

$\sum_{t=1}^{T} \mathbb{E}[h_t(X_t)] - \min_{X \succeq 0,\ \mathrm{Tr}(X)=1} \sum_{t=1}^{T} h_t(X) = O\!\left(\sqrt{T \log(T)}\right)$.

Sampling $x_t \sim X_t$ using the eigenvector decomposition described above yields:

$\mathbb{E}[R_{T,m}] = \sum_{t=m}^{T} \mathbb{E}[f_t(x_{t-m}, \ldots, x_t)] - \min_{\|x\|=1} \sum_{t=m}^{T} f_t(x, \ldots, x) = O\!\left(\sqrt{T \log(T)}\right)$.

Algorithm 3 Online Statistical Arbitrage (OSA)
1: Input: learning rate $\eta$, memory parameter $m$, regularizer $\lambda$.
2: Initialize $X_1 = \frac{1}{n} I_{n \times n}$.
3: for $t = 1$ to $T$ do
4:   Randomize $x_t \sim X_t$ using the eigenvector decomposition.
5:   Observe $f_t$ and define $h_t$ as in equation (3).
6:   Apply Algorithm 2 to $h_t(X_t)$ to get $X_{t+1}$.
7: end for

Remark: we assume here that the prices of the $n$ assets are bounded for all $t$ by a constant that is independent of $T$. The main novelty of our approach to the task of constructing mean-reverting portfolios is the ability to maintain the weight distributions online.
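The eigenvector-decomposition sampling step above can be sketched as follows; the matrix X here is an arbitrary illustrative PSD matrix with unit trace, not one produced by the algorithm:

```python
import numpy as np

# Sketch of the eigenvector-decomposition sampling step: given a PSD matrix
# X with Tr(X) = 1, write X = sum_i lambda_i v_i v_i^T and sample x = v_i
# with probability lambda_i. The matrix X below is an arbitrary example.
rng = np.random.default_rng(0)
n = 4
B = rng.standard_normal((n, n))
X = B @ B.T
X /= np.trace(X)                 # PSD matrix with unit trace

lam, V = np.linalg.eigh(X)       # lam >= 0 (up to rounding), sum(lam) = 1
lam = np.clip(lam, 0.0, None)
lam /= lam.sum()                 # guard against floating-point rounding
i = rng.choice(n, p=lam)
x = V[:, i]                      # a unit vector; E[x x^T] = X
```

Since the eigenvalues of a unit-trace PSD matrix are non-negative and sum to one, they form a valid sampling distribution, and the sampled unit vector satisfies the identity $\mathbb{E}[\tilde{f}_t(x)] = h_t(X)$ used in Corollary 5.1.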
This is in contrast to the traditional offline approaches that require a training period (to learn a weight distribution), and a trading period (to apply a corresponding trading strategy). 6 Application to Multi-Step Ahead Prediction Our second application is motivated by statistical models for time series prediction, and in particular by statistical models for multi-step ahead AR prediction. Thus, let {Xt}T t=1 be a time series (that is, a series of signal observations). The traditional AR (short for autoregressive) model, parameterized by lag p and coefficient vector α ∈Rp, assumes that each observation complies with Xt = p X k=1 αkXt−k + ϵt, where {ϵt}t∈Z is white noise. In words, the model assumes that Xt is a noisy linear combination of the previous p observations. Sometimes, an additional additive term α0 is included to indicate drift, but we ignore this for simplicity. The online setting for time series prediction is well-established by now, and appears in the works of [16, 17]. Here, we adapt this setting to the task of multi-step ahead AR prediction as follows: at round t, the online player has to predict Xt+m, while at her disposal are all the previous observations X1, . . . , Xt−1 (the parameter m determines the number of steps ahead). Then, Xt is revealed and she suffers loss of ft(Xt, ˜Xt), where ˜Xt denotes her prediction for Xt. For simplicity, we consider the squared loss to be our error measure, that is, ft(Xt, ˜Xt) = (Xt −˜Xt)2. In the statistical literature, a common approach to the problem of multi-step ahead prediction is to consider 1-step ahead recursive AR predictors [18, 19]: essentially, this approach makes use of standard methods (e.g., maximum likelihood or least squares estimation) to extract the 1-step ahead estimator. For instance, a least squares estimator for α at round t would be: αLS = arg min α (t−1 X τ=1 Xτ −˜XAR τ (α) 2 ) = arg min α t−1 X τ=1 Xτ − p X k=1 αkXτ−k !2 . 
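The least squares estimator $\alpha^{LS}$ above can be computed by stacking lagged observations into a design matrix; a minimal sketch (names are ours):

```python
import numpy as np

def fit_ar_least_squares(X, p):
    """Least-squares estimate of the 1-step-ahead AR(p) coefficients alpha_LS:
    regress each X_i on its p most recent predecessors."""
    M = np.array([X[i - p:i][::-1] for i in range(p, len(X))])  # rows: [X_{i-1}, ..., X_{i-p}]
    y = np.asarray(X[p:])
    alpha, *_ = np.linalg.lstsq(M, y, rcond=None)
    return alpha
```

On noiseless data generated by an AR(p) recursion, this recovers the true coefficients exactly (up to numerical precision).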
Then, $\alpha^{LS}$ is used to generate a prediction for $X_t$: $\tilde{X}^{AR}_t(\alpha^{LS}) = \sum_{i=1}^{p} \alpha^{LS}_i X_{t-i}$, which is in turn used as a proxy for it in order to predict the value of $X_{t+1}$:
$$\tilde{X}^{AR}_{t+1}(\alpha^{LS}) = \alpha^{LS}_1 \tilde{X}^{AR}_t(\alpha^{LS}) + \sum_{k=2}^{p} \alpha^{LS}_k X_{t-k+1}. \quad (4)$$
The values of $X_{t+2}, \ldots, X_{t+m}$ are predicted in the same recursive manner. The most obvious drawback of this approach is that not much can be said on the quality of this predictor even if the AR model is well-specified, let alone if it is not (see [18] for further discussion on this issue). In light of this, the motivation to formulate the problem of multi-step ahead prediction in the online setting is quite clear: attaining regret in this setting would imply that our algorithm's performance is comparable with the best 1-step ahead recursive AR predictor in hindsight (even if the latter is misspecified). Thus, our goal is to minimize the following regret term:
$$R_T = \sum_{t=1}^{T} \left( X_t - \tilde{X}_t \right)^2 - \min_{\alpha \in \mathcal{K}} \sum_{t=1}^{T} \left( X_t - \tilde{X}^{AR}_t(\alpha) \right)^2,$$
where $\mathcal{K}$ denotes the set of all 1-step ahead recursive AR predictors, against which we want to compete. Note that since the feedback is delayed (the AR coefficients chosen at round $t-m$ are used to generate the prediction at round $t$), the memory comes unavoidably into the picture. Nevertheless, here also both of our techniques are not straightforwardly applicable due to the non-convex structure of the problem: each prediction $\tilde{X}^{AR}_t(\alpha)$ contains products of $\alpha$ coefficients that cause the losses to be non-convex in $\alpha$.

Algorithm 4 Adaptation of Algorithm 1 to Multi-Step Ahead Prediction
1: Input: learning rate $\eta$, regularization function $R(x)$, signal $\{X_t\}_{t=1}^T$.
2: Choose $w_0, \ldots, w_m \in \mathcal{K}^{IP}$ arbitrarily.
3: for $t = m$ to $T$ do
4:   Predict $\tilde{X}^{IP}_t(w_{t-m}) = \sum_{k=1}^{p} (w_{t-m})_k X_{t-m-k}$ and suffer loss $\left( X_t - \tilde{X}^{IP}_t(w_{t-m}) \right)^2$.
5:   Set $w_{t+1} = \arg\min_{w \in \mathcal{K}^{IP}} \left\{ \eta \sum_{\tau=m}^{t} \left( X_\tau - \tilde{X}^{IP}_\tau(w) \right)^2 + \|w\|_2^2 \right\}$
6: end for
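The recursive plug-in prediction of Eq. (4) can be sketched as follows, feeding each prediction back into the lag buffer as if it had been observed (a sketch; names are ours):

```python
def ar_predict_recursive(history, alpha, steps):
    """Recursive plug-in prediction as around Eq. (4): each 1-step-ahead
    prediction is appended to the lag buffer as if it had been observed.
    alpha[0] weighs the most recent value (0-based version of alpha_1..alpha_p)."""
    buf = list(history)                # most recent observation last
    p = len(alpha)
    preds = []
    for _ in range(steps):
        x_hat = sum(alpha[k] * buf[-1 - k] for k in range(p))
        preds.append(x_hat)
        buf.append(x_hat)
    return preds
```

On a noiseless AR series with the true coefficients, the recursion reproduces the true continuation, which is exactly the plug-in behavior described above.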
To circumvent this issue, we use non-proper learning techniques, and let our predictions to be of the form ˜XNP t+m(w) = Pp k=1 wkXt−k for a properly chosen set KNP ⊂Rp of the w coefficients. Basically, the idea is to show that (a) attaining regret bound with respect to the best predictor in the new family can be done using the techniques we present in this work; and (b) the best predictor in the new family is better than the best 1-step ahead recursive AR predictor. This would imply a regret bound with respect to best 1-step ahead recursive AR predictor in hindsight. Our formal result is given in the following corollary: Corollary 6.1. Let D = supw1,w2∈KIP ∥w1 −w2∥2 and G = supw,t ∥∇ft(Xt, ˜Xt(w))∥2. Then, Algorithm 4 generates an online sequence {wt}T t=1, for which it holds that T X t=1 Xt −˜XIP t (wt−m) 2 −min α∈K T X t=1 Xt −˜XAR t (α) 2 ≤3GD √ Tm. Remark: The tighter bound in m (m1/2 instead of m3/4) follows directly by modifying the proof of theorem 3.1 to this setting (ft is affected only by wt−m and not by wt−m, . . . , wt). In the above, the values of D and G are determined by the choice of the set K. For instance, if we want to compete against the best α ∈K = [−1, 1]p we need to use the restriction wk ≤2m for all k. In this case, D ≈2m and G ≈1. If we consider K to be the set of all α ∈Rp such that αk ≤(1/ √ 2)k, we get that D ≈√m and G ≈1. The main novelty of our approach to the task of multi-step ahead prediction is the elimination of generative assumptions on the data, that is, we allow the time series to be arbitrarily generated. Such assumptions are common in the statistical literature, and needed in general to extract ML estimators. 7 Discussion and Conclusion In this work we extended the notion of online learning with memory to capture the general OCO framework, and proposed two algorithms with tight regret guarantees. 
We then applied our algorithms to two extensively studied problems: construction of mean reverting portfolios, and multistep ahead prediction. It remains for future work to further investigate the performance of our algorithms in these problems and other problems in which the memory naturally arises. Acknowledgments This work has been supported by the European Community’s Seventh Framework Programme (FP7/2007-2013) under grant agreement 306638 (SUPREL). 8 References [1] N. Cesa-Bianchi and G. Lugosi. Prediction, learning, and games. Cambridge University Press, 2006. [2] Elad Hazan. The convex optimization approach to regret minimization. Optimization for machine learning, page 287, 2011. [3] Shai Shalev-Shwartz. Online learning and online convex optimization. Foundations and Trends in Machine Learning, 4(2):107–194, 2012. [4] M.L. Puterman. Markov Decision Processes: Discrete Stochastic Dynamic Programming. Wiley Series in Probability and Statistics. Wiley, 1994. [5] Raman Arora, Ofer Dekel, and Ambuj Tewari. Online bandit learning against an adaptive adversary: from regret to policy regret. 2012. [6] Nicol`o Cesa-Bianchi, Ofer Dekel, and Ohad Shamir. Online learning with switching costs and other adaptive adversaries. CoRR, abs/1302.4387, 2013. [7] Neri Merhav, Erik Ordentlich, Gadiel Seroussi, and Marcelo J. Weinberger. On sequential strategies for loss functions with memory. IEEE Transactions on Information Theory, 48(7):1947–1958, 2002. [8] Andr´as Gy¨orgy and Gergely Neu. Near-optimal rates for limited-delay universal lossy source coding. In ISIT, pages 2218–2222, 2011. [9] Sascha Geulen, Berthold V¨ocking, and Melanie Winkler. Regret minimization for online buffering problems using the weighted majority algorithm. In COLT, pages 132–143, 2010. [10] Nick Littlestone and Manfred K. Warmuth. The weighted majority algorithm. In FOCS, pages 256–261, 1989. [11] Eyal Gofer. Higher-order regret bounds with switching costs. 
In Proceedings of The 27th Conference on Learning Theory, pages 210–243, 2014. [12] Ofer Dekel, Jian Ding, Tomer Koren, and Yuval Peres. Bandits with switching costs: Tˆ{2/3} regret. arXiv preprint arXiv:1310.2997, 2013. [13] Anatoly B. Schmidt. Financial Markets and Trading: An Introduction to Market Microstructure and Trading Strategies (Wiley Finance). Wiley, 1 edition, August 2011. [14] Alexandre D’Aspremont. Identifying small mean-reverting portfolios. Quant. Finance, 11(3):351–364, 2011. [15] Marco Cuturi and Alexandre D’aspremont. Mean reversion with a variance threshold. 28(3):271–279, May 2013. [16] Oren Anava, Elad Hazan, Shie Mannor, and Ohad Shamir. Online learning for time series prediction. arXiv preprint arXiv:1302.6927, 2013. [17] Oren Anava, Elad Hazan, and Assaf Zeevi. Online time series prediction with missing data. In ICML, 2015. [18] Michael P Clements and David F Hendry. Multi-step estimation for forecasting. Oxford Bulletin of Economics and Statistics, 58(4):657–684, 1996. [19] Massimiliano Marcellino, James H Stock, and Mark W Watson. A comparison of direct and iterated multistep ar methods for forecasting macroeconomic time series. Journal of Econometrics, 135(1):499– 526, 2006. [20] G.S. Maddala and I.M. Kim. Unit Roots, Cointegration, and Structural Change. Themes in Modern Econometrics. Cambridge University Press, 1998. [21] Soren Johansen. Estimation and Hypothesis Testing of Cointegration Vectors in Gaussian Vector Autoregressive Models. Econometrica, 59(6):1551–80, November 1991. [22] Jakub W Jurek and Halla Yang. Dynamic portfolio selection in arbitrage. In EFA 2006 Meetings Paper, 2007. [23] Elad Hazan, Amit Agarwal, and Satyen Kale. Logarithmic regret algorithms for online convex optimization. Machine Learning, 69(2-3):169–192, 2007. [24] L´aszl´o Lov´asz and Santosh Vempala. Logconcave functions: Geometry and efficient sampling algorithms. In FOCS, pages 640–649. IEEE Computer Society, 2003. 
[25] Hariharan Narayanan and Alexander Rakhlin. Random walk approach to regret minimization. In John D. Lafferty, Christopher K. I. Williams, John Shawe-Taylor, Richard S. Zemel, and Aron Culotta, editors, NIPS, pages 1777–1785. Curran Associates, Inc., 2010.
Eliciting Categorical Data for Optimal Aggregation

Chien-Ju Ho, Cornell University, ch624@cornell.edu
Rafael Frongillo, CU Boulder, raf@colorado.edu
Yiling Chen, Harvard University, yiling@seas.harvard.edu

Abstract

Models for collecting and aggregating categorical data on crowdsourcing platforms typically fall into two broad categories: those assuming agents are honest and consistent but with heterogeneous error rates, and those assuming agents are strategic and seek to maximize their expected reward. The former often leads to tractable aggregation of elicited data, while the latter usually focuses on optimal elicitation and does not consider aggregation. In this paper, we develop a Bayesian model wherein agents have differing quality of information but also respond to incentives. Our model generalizes both categories and enables the joint exploration of optimal elicitation and aggregation. This model enables our exploration, both analytically and experimentally, of optimal aggregation of categorical data and optimal multiple-choice interface design.

1 Introduction

We study the general problem of eliciting and aggregating information for categorical questions. For example, when posing a classification task to crowd workers who may have heterogeneous skills or amounts of information about the underlying true label, the principal wants to elicit workers' private information and aggregate it in a way that maximizes the probability that the aggregated information correctly predicts the underlying true label. Ideally, in order to maximize the probability of correctly predicting the ground truth, the principal would want to elicit agents' full information by asking agents for their entire belief in the form of a probability distribution over labels. However, this is not always practical; e.g., agents might not be able to accurately differentiate between 92% and 93%.
In practice, the principal is often constrained to elicit agents' information via a multiple-choice interface, which discretizes agents' continuous beliefs into finitely many partitions. An example of such an interface is illustrated in Figure 1. Moreover, regardless of whether full or partial information about agents' beliefs is elicited, aggregating the information into a single belief or answer is often done in an ad hoc fashion (e.g., majority voting for simple multiple-choice questions).

[Figure 1: An example of the task interface, a multiple-choice question asking "What's the texture shown in the image?"]

In this work, we explore the joint problem of eliciting and aggregating information for categorical data, with a particular focus on how to design the multiple-choice interface, i.e., how to discretize agents' belief space to form discrete choices. The goal is to maximize the probability of correctly predicting the ground truth while incentivizing agents to truthfully report their beliefs. This problem is challenging: changing the interface not only changes which agent beliefs lead to which responses, but also influences how to optimally aggregate these responses into a single label. Note that we focus on the abstract level of interface design. We explore the problem of how to partition agents' belief spaces for optimal aggregation. We do not discuss other behavioral aspects of interface design, such as question framing, layouts, etc.

We propose a Bayesian framework, which allows us to achieve our goal in three interleaving steps. First, we constrain our attention to interfaces which admit economically robust payment functions, that is, where agents seeking to maximize their expected payment select the answer that corresponds to their belief. Second, given an interface, we develop a principled way of aggregating information elicited through it, to obtain the maximum a posteriori (MAP) estimator.

30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.
Third, given the constraints on interfaces (e.g., only binary choice question is allowed) and aggregation methods, we can then choose the optimal interface, which leads to the highest prediction accuracy after both elicitation and aggregation. (Note that if there are no constraints, eliciting full information is always optimal.) Using theoretical analysis, simulations, and experiments, we provide answers to several interesting questions. Our main results are summarized as follows: • If the principal can elicit agents’ entire belief distributions, our framework can achieve optimal aggregation, in the sense that the principal can make predictions as if she has observed the private information of all agents (Section 4.1). This resolves the open problem of optimal aggregation for categorical data that was considered impossible to achieve in [1]. • For the binary-choice interface design question, we explore the design of optimal interfaces for small and large numbers of agents (Section 4.2). We conduct human-subject experiments on Amazon’s Mechanical Turk and demonstrate that our optimal binary-choice interface leads to better prediction accuracy than a natural baseline interface (Section 5.3). • Our framework gives a simple principled way of aggregating data from arbitrary interfaces (Section 5.1). Applied to experimental data from [2] for a particular multiple-choice interface, our aggregation method has better prediction accuracy than their majority voting (Section 5.2). • For general multiple-choice interfaces, we use synthetic experiments to obtain qualitative insights of the optimal interface. Moreover, our simple (heuristic) aggregation method performs nearly optimally, demonstrating the robustness of our framework (Section 5.1). 1.1 Related Work Eliciting private information from strategic agents has been a central question in economics and other related domains. 
The focus here is often on designing payment rules such that agents are incentivized to truthfully report their information. In this direction, proper scoring rules [3, 4, 5] have long been used for eliciting beliefs about categorical and continuous variables. When the realized value of a random variable will be observed, proper scoring rules have been designed for eliciting either the complete subjective probability distributions of the random variable [3, 4] or some statistical properties of these distributions [6, 7]. When the realized value of a random variable will not be available, a class of peer prediction mechanisms [8, 9] has been designed for truthful elicitation. These mechanisms often use proper scoring rules and leverage on the stochastic relationship of agents’ private information about the random variable in a Bayesian setting. However, work in this direction often takes elicitation as an end goal and doesn’t offer insights on how to aggregate the elicited information. Another theme in the existing literature is the development of statistical inference and probabilistic modeling methods for the purpose of aggregating agents’ inputs. Assuming a batch of noisy inputs, the EM algorithm [10] can be adopted to learn the skill level of agents and obtain estimates of the best answer [11, 12, 13, 14, 15]. Recently, extensions have been made to also consider task assignment and online task assignment in the context of these probabilistic models of agents [16, 17, 18, 19]. Work under this theme often assumes non-strategic agents who have some error rate and are rewarded with a fixed payment that doesn’t depend on their reports. This paper attempts to achieve both truthful elicitation and principled aggregation of information with strategic agents. The closest work to our paper is [1], which has the same general goal and uses a similar Bayesian model of information. 
That work achieves optimal aggregation by associating the confidence of an agent’s prediction with hyperparameters of a conjugate prior distribution. However, this approach leaves optimal aggregation for categorical data as an open question, which we resolve. Moreover, our model allows us to elicit confidence about an answer over a coarsened report space (e.g. a partition of the probability simplex) and to reason about optimal coarsening for the purpose 2 of aggregation. In comparison, [2] also elicit quantified confidence on reported labels in their mechanism. Their mechanism is designed to incentivize agents to truthfully report the label that they believe to be correct when their confidence on the report is above a threshold and skip the question when it’s below the threshold. Majority voting is then used to aggregate the reported labels. These thresholds provide a coarsened report space for eliciting confidence, and thus are well modeled by our approach. However, in that work the thresholds are given a priori, and moreover, the elicited confidence is not used in aggregation. These are both holes which our approach fill; in Section 5, we demonstrate how to derive optimal thresholds, and aggregation policies, which depend critically on the prior distribution and the number of agents. 2 Bayesian Model In our model, the principal would like to get information about a categorical question (e.g., predicting who will win the presidential election, or identifying whether there is a cancer cell in a picture of cells) from m agents. Each question has a finite number of possible answers X, |X| = k. The ground truth (correct answer) ⇥is drawn from a prior distribution p(✓), with realized value ✓2 X. This prior distribution is common knowledge to the principal and the agents. We use ✓⇤to denote the unknown, realized ground truth. Agents have heterogeneous levels of knowledge or abilities on the question that are unknown to the principal. 
To model agents' abilities, we assume each agent has observed independent noisy samples related to the ground truth. Hence, each agent's ability can be expressed as the number of noisy samples she has observed. The number of samples observed can be different across agents and is unknown to the principal. Formally, given the ground truth $\theta^*$, each noisy sample $X$, with $x \in \mathcal{X}$, is i.i.d. drawn according to the distribution $p(x|\theta^*)$.¹ In this paper, we focus our discussion on the symmetric noise distribution, defined as $p(x|\theta) = (1-\epsilon)\mathbf{1}\{\theta = x\} + \epsilon \cdot 1/k$. This noise distribution is common knowledge to the principal and the agents. While the symmetric noise distribution may appear restrictive, it is indeed quite general. In Appendix C, we discuss how our model covers many scenarios considered in the literature as special cases.

Beliefs of Agents. If an agent has observed $n$ noisy samples, $X_1 = x_1, \ldots, X_n = x_n$, her belief is determined by a count vector $\vec{c} = \{c_\theta : \theta \in \mathcal{X}\}$, where $c_\theta = \sum_{i=1}^{n} \mathbf{1}\{x_i = \theta\}$ is the number of samples equal to $\theta$ that the agent has observed. According to Bayes' rule, we write her posterior belief on $\Theta$ as $p(\theta|x_1, \ldots, x_n)$, which can be expressed as
$$p(\theta|x_1, \ldots, x_n) = \frac{\prod_{j=1}^{n} p(x_j|\theta)\, p(\theta)}{p(x_1, \ldots, x_n)} = \frac{\alpha^{c_\theta} \beta^{\,n - c_\theta}\, p(\theta)}{\sum_{\theta' \in \mathcal{X}} \alpha^{c_{\theta'}} \beta^{\,n - c_{\theta'}}\, p(\theta')},$$
where $\alpha = 1 - \epsilon + \epsilon/k$ and $\beta = \epsilon/k$. In addition to the posterior on $\Theta$, the agent also has an updated belief, called the posterior predictive distribution (PPD), about an independent sample $X$ given the observed samples $X_1 = x_1, \ldots, X_n = x_n$. The PPD can be considered as a noisy version of the posterior:
$$p(x|x_1, \ldots, x_n) = \frac{\epsilon}{k} + (1-\epsilon)\, p(\Theta = x \,|\, x_1, \ldots, x_n).$$
In fact, in our setting the PPD and posterior are in one-to-one correspondence, so while our theoretical results focus on the PPD, our experiments will consider the posterior without loss of generality.

Interface. An interface defines the space of reports the principal can elicit from agents.
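The belief updates above are easy to compute from the count vector; a minimal sketch of the posterior and PPD under the symmetric noise model (function names are ours):

```python
import numpy as np

def posterior_from_counts(counts, prior, eps):
    """Posterior p(theta | samples) under the symmetric noise model
    p(x|theta) = (1-eps)*1{x=theta} + eps/k, given per-label sample counts."""
    counts = np.asarray(counts, dtype=float)
    k = len(counts)
    n = counts.sum()
    a, b = 1 - eps + eps / k, eps / k          # alpha and beta from the text
    unnorm = (a ** counts) * (b ** (n - counts)) * np.asarray(prior, dtype=float)
    return unnorm / unnorm.sum()

def ppd_from_posterior(post, eps):
    """Posterior predictive distribution: a noisy version of the posterior."""
    k = len(post)
    return eps / k + (1 - eps) * np.asarray(post)
```

For example, with $k = 2$, $\epsilon = 0.5$, a uniform prior, and counts $(3, 1)$, the posterior is $(0.9, 0.1)$, matching the closed form above.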
The reports elicited via the interface naturally partition agents’ beliefs, a k-dimensional probability simplex, into a (potentially infinite) number of cells, which each correspond to a coarsened version of agents’ PPD. Formally, each interface consists of a report space R and a partition D = {Dr ✓∆k}r2R, with each cell Dr corresponding to a report r and S r2R Dr = ∆k.2 In this paper, we sometime use only R or D to represent an interface. 1When there is no ambiguity, we use p(x|✓⇤) to represent p(X = x|⇥= ✓⇤) and similar notations for other distributions. 2Strictly speaking, we will allow cells to overlap on their boundary; see Section 3 for more discussion. 3 In this paper, we focus on the abstract level of the interface design. We explore the problem of how to partition agents’ belief spaces for optimal aggregations. We do not discuss other aspects of interface design, such as question framing, layouts, etc. In practice there are often pre-specified constraints on the design of interfaces, e.g., the principal can only ask agents a multiple-choice question with no more than 2 choices. We explore how to optimal design interfaces with given constraints. Objective. The goal of the principal is to choose an interface corresponding to a partition D, satisfying some constraints, and an aggregation method AggD, to maximize the probability of correctly predicting the ground truth. One very important constraint is that there should exist a payment method for which agents are correctly incentivized to report r if their belief is in Dr; see Section 3. We can formulate the goal as the following optimization problem, max (R,D)2Interfaces max AggD Pr[AggD(R1, . . . , Rm) = ⇥] , (1) where Ri are random variables representing the reports chosen by agents after ✓⇤and the samples are drawn. 3 Our Mechanism We assume the principal has access to a single independent noisy sample X drawn from p(x|✓⇤). 
The principal can then leverage this sample to elicit and aggregate agents’ beliefs by adopting techniques in proper scoring rules [3, 5]. This assumption can be satisfied by, for example, allowing the principal to ask for an additional opinion outside of the m agents, or by asking agents multiple questions and only scoring a small random subset for which answers can be obtained separately (often, on the so-called “gold standard set”). Our mechanism can be described as follows. The principal chooses an interface with report space R and partition D, and a scoring rule S(r, x) for r 2 R and x 2 X. The principal then requests a report ri 2 R for each agent i 2 {1, . . . , m}, and observes her own sample X = x. She then gives a score of S(ri, x) to agent i and aggregates the reports via a function AggD : R⇥· · ·⇥R ! X. Agents are assumed to be rational and aim to maximize their expected scores. In particular, if an agent i believes X is drawn from some distribution p, she will choose to report ri 2 argmaxr2R EX⇠p[S(r, X)]. Elicitation. To elicit truthful reports from agents, we adopt techniques from proper scoring rules [3, 5]. A scoring rule is strictly proper if reporting one’s true belief uniquely maximizes the expected score. For example, a strictly proper score is the logarithmic scoring rule, S(p, x) = log p(x), where p(x) is the agent’s belief of the distribution x is drawn from. In our setting, we utilize the requester’s additional sample p(x|✓⇤) to elicit agents’ PPDs p(x|x1, . . . , xn). If the report space R = ∆k, we can simply use any strictly proper scoring rules, such as the logarithmic scoring rule, to elicit truthful reports. If the set of report space R is finite, we must specify what it means to be truthful. The partition D defined in the interface is a way of codifying this relationship: a scoring rule is truthful with respect to a partition if report r is optimal whenever an agent’s belief lies in cell Dr.3 Definition 1. 
S(r, x) is truthful with respect to D if for all r 2 R and all p 2 ∆k we have p 2 Dr () 8r0 6= r EpS(r, X) ≥EpS(r0, X) . Several natural questions arise from this definition. For which partitions D can we devise such truthful scores? And if we have such a partition, what are all the scores which are truthful for it? As it happens, these questions have been answered in the field of property elicitation [20, 21], with the verdict that there exist truthful scores for D if and only if D forms a power diagram, a type of weighted Voronoi diagram [22]. Thus, when we consider the problem of designing the interface for a crowdsourcing task, if we want to have robust economic incentives, we must confine ourselves to interfaces which induce power 3As mentioned above, strictly speaking, the cells {Dr}r2R do not form a partition because their boundaries overlap. This is necessary: for any (nontrivial) finite-report mechanism, there exist distributions for which the agent is indifferent between two or more reports. Fortunately, the set of all such distributions has Lebesgue measure 0 in the simplex, so these boundaries do not affect our analysis. 4 diagrams on the set of agent beliefs. In this paper, we focus on two classes of power diagrams: threshold partitions, where the membership p 2 Dr can be decided by comparisons of the form t1 p✓t2, and shadow partitions, where p 2 Dr () r = argmaxx p(x) −p⇤(x) for some reference distribution p⇤. Threshold partitions cover those from [2], and shadow partitions are inspired by the Shadowing Method from peer prediction [23]. Aggregation. The goal of the principal is to aggregate the agents’ reports into a single prediction which maximizes the probability of correctly predicting the ground truth. More formally, let us assume that the principal obtains reports r1, . . . , rm from m agents such that the belief pi of agent i lies in Di := Dri. 
In order to maximize the probability of correct predictions, the principal aggregates the reports by calculating the posterior $p(\theta|D_1, \ldots, D_m)$ for all $\theta$ and making the prediction $\hat\theta$ that maximizes the posterior:
$$\hat\theta = \operatorname{argmax}_\theta\, p(\theta|D_1, \ldots, D_m) = \operatorname{argmax}_\theta \left( \prod_{i=1}^{m} p(D_i|\theta) \right) p(\theta),$$
where $p(D_i|\theta)$ is the probability that the PPD of agent $i$ falls within $D_i$ given the ground truth $\theta$. To calculate $p(D|\theta)$, we assume agents' abilities, represented by the number of samples, are drawn from a distribution $p(n)$. We assume $p(n)$ is known to the principal. This assumption can be satisfied if the principal is familiar with the market and has knowledge of agents' skill distribution. Empirically, in our simulations, the optimal interface is robust to the choice of this distribution.
$$p(D|\theta) = \sum_n \left( \sum_{x_1..x_n :\, p(\theta|x_1..x_n) \in D} p(x_1..x_n|\theta) \right) p(n) = \sum_n \left( \sum_{\vec{c} :\, p(\theta|\vec{c}) \in D} \binom{n}{\vec{c}}\, \alpha^{c_\theta} \beta^{\,n-c_\theta} \right) \frac{p(n)}{Z(n)},$$
with $Z(n) = \sum_{\vec{c}} \binom{n}{\vec{c}}\, \alpha^{c_1} \beta^{\,n-c_1}$ and $\binom{n}{\vec{c}} = n!/(\prod_i c_i!)$, where $c_i$ is the $i$-th component of $\vec{c}$.

Interface Design. Let $P(D)$ be the probability of correctly predicting the ground truth given partition $D$, assuming the best possible aggregation policy. The expectation is taken over which cell $D_i \in D$ each of the $m$ agents reports.
$$P(D) = \sum_{D_1, \ldots, D_m} \max_\theta\, p(\theta|D_1, \ldots, D_m)\, p(D_1, \ldots, D_m) = \sum_{D_1, \ldots, D_m} \max_\theta \left( \prod_{i=1}^{m} p(D_i|\theta) \right) p(\theta).$$
The optimal interface design problem is to find an interface with partition $D$ within the set of feasible interfaces such that, in expectation, $P(D)$ is maximized.

4 Theoretical Analysis

In this section, we analyze two settings to illustrate what our mechanism can achieve. We first consider the setting in which the principal can elicit full belief distributions from agents. We show that our mechanism can obtain optimal aggregation, in the sense that the principal can make predictions as if she has observed all the private signals observed by all workers.
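The MAP aggregation rule above reduces to summing log-likelihoods of the observed cells; a minimal sketch, assuming the per-cell likelihoods $p(D_i|\theta)$ have already been computed as described (names are ours):

```python
import numpy as np

def map_aggregate(report_likelihoods, prior):
    """MAP aggregation: argmax_theta  prod_i p(D_i | theta) * p(theta),
    computed in log space for numerical stability.
    report_likelihoods: one length-k array per agent, giving p(D_i | theta)."""
    log_post = np.log(np.asarray(prior, dtype=float))
    for lik in report_likelihoods:
        log_post = log_post + np.log(np.asarray(lik, dtype=float))
    return int(np.argmax(log_post))
```

Working in log space is a standard choice here: with many agents, products of small probabilities would otherwise underflow.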
In the second setting, we consider a common setting with binary signals and binary cells (e.g., binary classification tasks with two-option interface). We demonstrate how to choose the optimal interface when we aim to collect data from one single agent and when we aim to collect data from a large number of agents. 4.1 Collecting Full Distribution Consider the setting in which the allowed reports are full distributions over labels. We show that in this setting, the principal can achieve optimal aggregation. Formally, the interface consists of a report space R = ∆k ⇢[0, 1]k, which is the k-dimensional probability simplex, corresponding to beliefs about the principal’s sample X given the observed samples of an agent. The aggregation is optimal if the principal can obtain global PPD. Definition 2 ([1]). Let S be the set of all samples observed by agents. Given the prior p(✓) and data S distributed among the agents, the global PPD is given by p(x|S). 5 In general, as noted in [1], computing the global PPD requires access to agents’ actual samples, or at least their counts, whereas the principal can at most elicit the PPD. In that work, it is therefore considered impossible for the principal to leverage a single sample to obtain the global PPD for a categorical question, as there does not exist a unique mapping from PPDs to sample counts. While our setting differs from that paper, we intuitively resolve this impossibility by finding a non-trivial unique mapping between the differences of sample counts and PPDs. Lemma 1. Fix ✓0 2 X and let di↵i 2 Zk−1 be the vector di↵i ✓= ci ✓0 −ci ✓encoding the differences in the number of samples of ✓and ✓0 that agent i has observed. There exists an unique mapping between di↵i and the PPD of agent i. 
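In the binary case the mapping of Lemma 1 is explicit: the log posterior odds are linear in the count difference, so the difference can be recovered per agent and summed to obtain the global belief. A sketch using the posterior rather than the PPD (the two are in one-to-one correspondence, as noted in Section 2; names are ours):

```python
import numpy as np

def diff_from_posterior(post, prior, eps, k=2):
    """Recover diff = c_0 - c_1 from a binary posterior by inverting the update
    p(0|c)/p(1|c) = (alpha/beta)^(c_0 - c_1) * p(0)/p(1)  (a sketch of Lemma 1)."""
    a, b = 1 - eps + eps / k, eps / k
    log_odds = np.log(post[0] / post[1]) - np.log(prior[0] / prior[1])
    return log_odds / np.log(a / b)

def global_posterior(diffs, prior, eps, k=2):
    """Global posterior from the summed per-agent count differences."""
    a, b = 1 - eps + eps / k, eps / k
    odds = (a / b) ** sum(diffs) * (prior[0] / prior[1])
    return np.array([odds / (1 + odds), 1 / (1 + odds)])
```

This is the binary instance of the general recipe in Theorem 2: convert each reported belief to a count difference, sum, and convert back.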
With Lemma 1 in hand, assuming the principal can obtain the full PPD from each agent, she can now compute the global PPD: she simply converts each agents’ PPD into a sample count difference, sums these differences, and finally converts the total differences into the global PPD. Theorem 2. Given the PPDs of all agents, the principal can obtain the global PPD. 4.2 Interface Design in Binary Settings To gain the intuition about optimal interface design, we examine a simple setting with binary signal X = {0, 1} and a partitions with only two cells. To simplify the discussion, we also assume all agents have observed exactly n samples. In this setting, each partition can be determined by a single parameter, the threshold pT ; its cells indicate whether the agent believes the probability of the principal’s sample X to be 0 is larger than pT or not. Note that we can also write the threshold as T, the number of samples that the agent observes to be signal 0. Membership in the two cells indicates whether or not the agents observes more than T samples with signal 0. We first give the result when there is only one agent. 4 Lemma 3. In the binary-signal and two-cell setting, if the number of agents is one, the optimal partition has threshold pT ⇤= 1/2. If the number of agents is large, we numerically solve for the optimal partition with a wide range of parameters. We find that the optimal partition is to set the threshold such that agents’ posterior belief on the ground truth is the same as the prior. This is equivalent to asking agents whether they observe more samples with signal 0 or with signal 1. Please see Appendix B and H for more discussion. The above arguments suggest that when the principal plans to collect data from multiple agents for datasets with asymmetric priors (e.g., identifying anomaly images from a big dataset), adopting our interface would lead to better aggregation than traditional interface do. 
We evaluate this claim in real-world experiments in Section 5.3.

5 Experiments

To confirm our theoretical results and test our model, we turn to experiments. In our synthetic experiments, we explore what the model tells us about optimal partitions and how they behave as a function of the model, giving qualitative insights into interface design. We also introduce a heuristic aggregation method, which allows our results to be easily applied in practice. In addition to validating our heuristics numerically, we show that they lead to real improvements over simple majority voting by re-aggregating data from previous work [2]. Finally, we perform our own experiments on a binary-signal task and show that the optimal mechanism under the model, coupled with heuristic aggregation, significantly outperforms the baseline.

5.1 Synthetic Experiments

From our theoretical results, we expect that in the binary setting, the boundary of the optimal partition should be roughly uniform for small numbers of agents and quickly approach the prior as the number of agents per task increases. In the Appendix, we confirm this numerically. Figure 2 extends this intuition to the 3-signal case, where the optimal reference point p* for a shadow partition closely tracks the prior. Figure 2 also gives insight into the design of threshold partitions, showing that the threshold values should decrease as agent uncertainty increases. The Appendix gives other qualitative findings.

Figure 2: Optimal interfaces as a function of the model; the prior is shown in each as a red dot. Each triangle represents the probability simplex on three signals (0, 1, 2), and the cells (sets of posteriors) of the partition defined by the interface are delineated by dashed lines. Top: the optimal shadow partition for three agents. Here the reference distribution p* is close to the prior, but often slightly toward uniform, as suggested by the behavior in the binary case (Section 4.2); for larger numbers of agents this point in fact always matches the prior. Bottom: the optimal threshold partition for increasing values of ε. Here, as one would expect, the more uncertainty agents have about the true label, the lower the thresholds should be.

Figure 3: Prediction error according to our model as a function of the prior for (a) the optimal partition with optimal aggregation, (b) the optimal partition with heuristic aggregation, and (c) the naïve partition and aggregation. As we see, the heuristics are nearly optimal and yield significantly lower error than the baseline.

⁴ Our result can be generalized to k signals and one agent. See Lemma 4 in Appendix G.

The optimal partitions and aggregation policies suggested by our framework are often quite complicated. Thus, to be practical, one would like simple partitions and aggregation methods that perform nearly optimally under our framework. Here we suggest a heuristic aggregation (HA) method, defined for a fixed number of samples n: for each cell D_r, consider the set of count vectors after which an agent's posterior would lie in D_r, and let c_r be the average count vector in this set. Now when agents report r_1, ..., r_m, simply sum the count vectors and choose θ̂ = HA(r_1, ..., r_m) = argmax_θ p(θ | c_{r_1} + ... + c_{r_m}). Thus, by simply translating the choice of cell D_r into a representative sample count an agent may have observed, we arrive at a weighted-majority-like aggregation method. This simple method performs quite well in simulations, as Figure 3 shows. It also performs well in practice, as we will see in the next two subsections.

5.2 Aggregation Results for Existing Mechanisms

We evaluate our heuristic aggregation method using the dataset collected from the existing mechanisms of previous work [2].
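The heuristic aggregation method described above can be sketched for the binary-signal threshold interface; the noise model (each of an agent's n samples equals the ground truth with probability 1 − ε) and all parameter values here are illustrative assumptions, not the paper's experimental settings.

```python
import numpy as np

# Illustrative binary model: ground truth theta in {0, 1}; each of an agent's
# n samples equals theta with probability 1 - eps. A report is the cell index
# of the agent's posterior under a threshold partition on p(theta=0 | counts).
n, eps = 4, 0.2
prior0 = 0.5          # p(theta = 0)
p_T = 0.5             # threshold on the posterior of theta = 0

def post0(k):
    """p(theta = 0 | k of n samples were signal 0)."""
    l0 = prior0 * (1 - eps) ** k * eps ** (n - k)
    l1 = (1 - prior0) * eps ** k * (1 - eps) ** (n - k)
    return l0 / (l0 + l1)

# Cell 0: posterior above threshold; cell 1: otherwise.
cells = {0: [k for k in range(n + 1) if post0(k) > p_T],
         1: [k for k in range(n + 1) if post0(k) <= p_T]}
# Representative count for each cell: average over counts mapping into it.
rep = {r: np.mean(ks) for r, ks in cells.items()}

def heuristic_aggregate(reports):
    """Sum representative counts, then take the MAP ground truth."""
    k_total = sum(rep[r] for r in reports)
    n_total = n * len(reports)
    l0 = prior0 * (1 - eps) ** k_total * eps ** (n_total - k_total)
    l1 = (1 - prior0) * eps ** k_total * (1 - eps) ** (n_total - k_total)
    return 0 if l0 > l1 else 1

print(heuristic_aggregate([0, 0, 1]))   # prints 0: two cell-0 reports outweigh one cell-1 report
```

Translating each cell to its representative count is what makes this a weighted-majority-like rule: a confident cell contributes a larger effective count toward its label.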
Their dataset was collected by asking workers to answer a multiple-choice question and simultaneously select one of two confidence levels. We compare our heuristic aggregation (HA) with the simple majority voting (Maj) adopted in their paper. For our heuristics, we used the model with n = 4 and ε = 0.85 in every case; this was the simplest model for which every cell in every partition contained at least one possible posterior. Our results are fairly robust to the choice of model subject to this constraint, however, and other models often perform even better. In Figure 4, we show the aggregation results for one of the tasks ("National Flags") in their dataset. Although the improvement is relatively small, it is statistically significant for every setting plotted. Our HA outperformed Maj for all of their datasets and for all values of m.

Figure 4: The prediction error of aggregating data collected from the existing mechanisms of previous work [2].

Figure 5: The prediction error of aggregating data collected from Amazon Mechanical Turk.

5.3 Experiments on Amazon Mechanical Turk

We conducted experiments on Amazon Mechanical Turk (mturk.com) to evaluate our interface design. Our goal was to examine whether workers respond to different interfaces, and whether the interface and aggregation derived from our framework actually lead to better predictions.

Experiment setup. In our experiment, workers are asked to label 20 blurred images of textures. We considered an asymmetric prior: 80% of the images were carpet and 20% were granite, and we communicated this to the workers. Upon accepting the task, workers were randomly assigned to one of two treatments: Baseline or ProbBased. Both offered a base payment of 10 cents, but the bonus payments on the 5 randomly chosen "ground truth" images differed between the treatments. The Baseline treatment is the most commonly seen interface in crowdsourcing markets.
For each image, the worker is asked to choose from {Carpet, Granite}, and can earn a bonus of 4 cents for each correct answer in the ground-truth set. In the ProbBased interface, the worker is instead asked whether she thinks the probability of the image being Carpet is {more than 80%, no more than 80%}. From Section 4.2, this threshold is optimal when we aim to aggregate information from a potentially large number of agents. To simplify the discussion, we map the two options to {Carpet, Granite} for the rest of this section. For the 5 randomly chosen ground-truth images, the worker earns 2 cents for each correct answer on carpet images and 8 cents for each correct answer on granite images. We tuned the bonus amounts such that the expected bonus for answering all questions correctly is approximately the same in each treatment. One can also easily check that for these bonus amounts, workers maximize their expected bonus by honestly reporting their beliefs.

Results. The experiment was completed by 200 workers, 105 in Baseline and 95 in ProbBased. We first examine whether workers' responses differ across interfaces. In particular, we compare the fraction of workers reporting Granite. As shown in Figure 6 (in Appendix A), workers do respond to our interface design and are more likely to choose Granite for all images; the differences are statistically significant (p < 0.01). We then examine whether this interface combined with our heuristic aggregation leads to better predictions. We perform majority voting (Maj) for Baseline and apply our heuristic aggregation (HA) to ProbBased. We choose the simplest model (n = 1) for HA, though the results are robust for higher n. Figure 5 shows that our interface leads to considerably smaller aggregation error for different numbers of randomly selected workers.
Performing HA for Baseline and Maj for ProbBased both led to higher aggregation errors, which underscores the importance of matching the aggregation to the interface.

6 Conclusion

We have developed a Bayesian framework to model the elicitation and aggregation of categorical data, giving a principled way not only to aggregate information collected from arbitrary interfaces, but also to design the interfaces themselves. Our simulation and experimental results show the benefit of our framework, resulting in significant prediction performance gains over standard interfaces and aggregation methods. Moreover, our theoretical and simulation results give new insights into the design of optimal interfaces, some of which we confirm experimentally. While more experiments are certainly needed to fully validate our methods, we believe our general framework has value when designing interfaces and aggregation policies for eliciting categorical information.

Acknowledgments

We thank the anonymous reviewers for their helpful comments. This research was partially supported by NSF grant CCF-1512964, NSF grant CCF-1301976, and ONR grant N00014-15-1-2335.

References

[1] R. M. Frongillo, Y. Chen, and I. Kash. Elicitation for aggregation. In The Twenty-Ninth AAAI Conference on Artificial Intelligence, 2015.
[2] N. B. Shah and D. Zhou. Double or nothing: Multiplicative incentive mechanisms for crowdsourcing. In Neural Information Processing Systems, NIPS '15, 2015.
[3] G. W. Brier. Verification of forecasts expressed in terms of probability. Monthly Weather Review, 78(1):1–3, 1950.
[4] L. J. Savage. Elicitation of personal probabilities and expectations. Journal of the American Statistical Association, 66(336):783–801, 1971.
[5] T. Gneiting and A. E. Raftery. Strictly proper scoring rules, prediction, and estimation. Journal of the American Statistical Association, 102(477):359–378, 2007.
[6] N. S. Lambert, D. M. Pennock, and Y. Shoham. Eliciting properties of probability distributions.
In Proceedings of the 9th ACM Conference on Electronic Commerce, EC '08, pages 129–138. ACM, 2008.
[7] R. Frongillo and I. Kash. Vector-valued property elicitation. In Proceedings of the 28th Conference on Learning Theory, pages 1–18, 2015.
[8] N. Miller, P. Resnick, and R. Zeckhauser. Eliciting informative feedback: The peer-prediction method. Management Science, 51(9):1359–1373, 2005.
[9] D. Prelec. A Bayesian truth serum for subjective data. Science, 306(5695):462–466, 2004.
[10] A. P. Dempster, N. M. Laird, and D. B. Rubin. Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society: Series B, 39:1–38, 1977.
[11] V. Raykar, S. Yu, L. Zhao, G. Valadez, C. Florin, L. Bogoni, and L. Moy. Learning from crowds. Journal of Machine Learning Research, 11:1297–1322, 2010.
[12] S. R. Cholleti, S. A. Goldman, A. Blum, D. G. Politte, and S. Don. Veritas: Combining expert opinions without labeled data. In Proceedings of the 20th IEEE International Conference on Tools with Artificial Intelligence, 2008.
[13] R. Jin and Z. Ghahramani. Learning with multiple labels. In Advances in Neural Information Processing Systems, volume 15, pages 897–904, 2003.
[14] J. Whitehill, P. Ruvolo, T. Wu, J. Bergsma, and J. Movellan. Whose vote should count more: Optimal integration of labels from labelers of unknown expertise. In Advances in Neural Information Processing Systems, volume 22, pages 2035–2043, 2009.
[15] A. P. Dawid and A. M. Skene. Maximum likelihood estimation of observer error-rates using the EM algorithm. Applied Statistics, 28:20–28, 1979.
[16] D. R. Karger, S. Oh, and D. Shah. Iterative learning for reliable crowdsourcing systems. In The 25th Annual Conference on Neural Information Processing Systems (NIPS), 2011.
[17] D. R. Karger, S. Oh, and D. Shah. Budget-optimal crowdsourcing using low-rank matrix approximations. In Proc. 49th Annual Conference on Communication, Control, and Computing (Allerton), 2011.
[18] J. Zou and D. C. Parkes.
Get another worker? Active crowdlearning with sequential arrivals. In Proceedings of the Workshop on Machine Learning in Human Computation and Crowdsourcing, 2012.
[19] C. Ho, S. Jabbari, and J. W. Vaughan. Adaptive task assignment for crowdsourced classification. In The 30th International Conference on Machine Learning (ICML), 2013.
[20] N. Lambert and Y. Shoham. Eliciting truthful answers to multiple-choice questions. In Proceedings of the Tenth ACM Conference on Electronic Commerce, EC '09, pages 109–118, 2009.
[21] R. Frongillo and I. Kash. General truthfulness characterizations via convex analysis. In Web and Internet Economics, pages 354–370. Springer, 2014.
[22] F. Aurenhammer. Power diagrams: Properties, algorithms and applications. SIAM Journal on Computing, 16(1):78–96, 1987.
[23] J. Witkowski and D. Parkes. A robust Bayesian truth serum for small populations. In Proceedings of the 26th AAAI Conference on Artificial Intelligence, AAAI '12, 2012.
[24] V. Sheng, F. Provost, and P. Ipeirotis. Get another label? Improving data quality using multiple, noisy labelers. In ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD), 2008.
[25] P. Ipeirotis, F. Provost, V. Sheng, and J. Wang. Repeated labeling using multiple noisy labelers. Data Mining and Knowledge Discovery, 2014.
Stochastic Gradient Richardson-Romberg Markov Chain Monte Carlo

Alain Durmus¹, Umut Şimşekli¹, Éric Moulines², Roland Badeau¹, Gaël Richard¹
1: LTCI, CNRS, Télécom ParisTech, Université Paris-Saclay, 75013, Paris, France
2: Centre de Mathématiques Appliquées, UMR 7641, École Polytechnique, France

Abstract

Stochastic Gradient Markov Chain Monte Carlo (SG-MCMC) algorithms have become increasingly popular for Bayesian inference in large-scale applications. Even though these methods have proved useful in several scenarios, their performance is often limited by their bias. In this study, we propose a novel sampling algorithm that aims to reduce the bias of SG-MCMC while keeping the variance at a reasonable level. Our approach is based on a numerical sequence acceleration method, namely Richardson-Romberg extrapolation, which simply boils down to running almost the same SG-MCMC algorithm twice in parallel with different step sizes. We illustrate our framework on the popular Stochastic Gradient Langevin Dynamics (SGLD) algorithm and propose a novel SG-MCMC algorithm referred to as Stochastic Gradient Richardson-Romberg Langevin Dynamics (SGRRLD). We provide a formal theoretical analysis and show that SGRRLD is asymptotically consistent, satisfies a central limit theorem, and that its non-asymptotic bias and mean squared error can be bounded. Our results show that SGRRLD attains higher rates of convergence than SGLD both in finite time and asymptotically, and that it achieves the theoretical accuracy of methods that are based on higher-order integrators. We support our findings with experiments on both synthetic and real data.

1 Introduction

Markov Chain Monte Carlo (MCMC) techniques are one of the most popular families of algorithms in Bayesian machine learning. Recently, novel MCMC schemes based on stochastic optimization have been proposed for scaling up Bayesian inference to large-scale applications.
These so-called Stochastic Gradient MCMC (SG-MCMC) methods provide a fruitful framework for Bayesian inference, well adapted to massively parallel and distributed architectures. In this domain, a first and important attempt was made by Welling and Teh [1], who combined ideas from the Unadjusted Langevin Algorithm (ULA) [2] and Stochastic Gradient Descent (SGD) [3] and proposed a scalable MCMC framework referred to as Stochastic Gradient Langevin Dynamics (SGLD). Unlike conventional batch MCMC methods, SGLD uses subsamples of the data per iteration, similar to SGD. Several extensions of SGLD have been proposed [4–12]. Recently, it has been shown in [10] that under certain assumptions and with a sufficiently large number of iterations, the bias and the mean squared error (MSE) of a general class of SG-MCMC methods can be bounded as O(γ) and O(γ²), respectively, where γ is the step size of the Euler-Maruyama integrator. The authors also showed that these bounds can be improved by making use of higher-order integrators.

In this paper, we propose a novel SG-MCMC algorithm, called Stochastic Gradient Richardson-Romberg Langevin Dynamics (SGRRLD), that aims to reduce the bias of SGLD by applying a numerical sequence acceleration method, namely Richardson-Romberg (RR) extrapolation, which requires running two chains with different step sizes in parallel. While reducing the bias, SGRRLD also keeps the variance of the estimator at a reasonable level by using correlated Brownian motions.

30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.

We show that the asymptotic bias and variance of SGRRLD can be bounded as O(γ²) and O(γ⁴), respectively. We also show that after K iterations, our algorithm achieves a rate of convergence for the MSE of order O(K^{-4/5}), whereas this rate for SGLD and its extensions with first-order integrators is of order O(K^{-2/3}).
Our results show that by using only a first-order numerical integrator, the proposed approach can achieve the theoretical accuracy of methods that are based on higher-order integrators, such as the ones given in [10]. This accuracy can be improved even further by applying the RR extrapolation multiple times in a recursive manner [13]. On the other hand, since the two chains required by the RR extrapolation can be generated independently, the SGRRLD algorithm is well adapted to parallel and distributed architectures. It is also worth noting that our technique is quite generic and can be applied to virtually all current SG-MCMC algorithms besides SGLD, provided that they satisfy rather technical weak-error and ergodicity conditions.

In order to assess the performance of the proposed method, we conduct several experiments on both synthetic and real datasets. We first apply our method to a rather simple Gaussian model whose posterior distribution is analytically available and compare the performance of SGLD and SGRRLD. In this setting, we also illustrate the generality of our technique by applying the RR extrapolation to Stochastic Gradient Hamiltonian Monte Carlo (SGHMC) [6]. Then, we apply our method to a large-scale matrix factorization problem for a movie recommendation task. The numerical experiments support our theoretical results: our approach achieves improved accuracy over SGLD and SGHMC.

2 Preliminaries

2.1 Stochastic Gradient Langevin Dynamics

In MCMC, one aims at generating samples from a target probability measure π that is known up to a multiplicative constant. Assume that π has a density with respect to the Lebesgue measure, still denoted by π, given by π(θ) = e^{-U(θ)} / ∫_{R^d} e^{-U(θ̃)} dθ̃, where U : R^d → R is called the potential energy function. In practice, directly generating samples from π turns out to be intractable except in very few special cases, so one often needs to resort to approximate methods.
A popular way to approximately generate samples from π is based on discretizations of a stochastic differential equation (SDE) that has π as an invariant distribution [14]. A common choice is the over-damped Langevin equation associated with π, i.e. the SDE given by

dϑ_t = -∇U(ϑ_t) dt + √2 dB_t ,    (1)

where (B_t)_{t≥0} is the standard d-dimensional Brownian motion. Under mild assumptions on U (cf. [2]), (ϑ_t)_{t≥0} is a well-defined Markov process which is geometrically ergodic with respect to π. Therefore, if continuous sample paths from (ϑ_t)_{t≥0} could be generated, they could be used as approximate samples from π. Since this is not possible, in practice we need to use a discretization of (1). The most common discretization is the Euler-Maruyama scheme, which boils down to iteratively applying the following update equation:

θ_{k+1} = θ_k - γ_{k+1} ∇U(θ_k) + √(2γ_{k+1}) Z_{k+1} ,  for k ≥ 0,

with initial state θ_0. Here, (γ_k)_{k≥1} is a sequence of non-increasing step sizes and (Z_k)_{k≥1} is a sequence of independent and identically distributed (i.i.d.) d-dimensional standard normal random variables. This scheme is called the Unadjusted Langevin Algorithm (ULA) [2]. When the sequence of step sizes (γ_k)_{k≥0} goes to 0 as k goes to infinity, it has been shown in [15] and [16] that the empirical distribution of (θ_k)_{k≥0} weakly converges to π under certain assumptions. A central limit theorem for additive functionals has also been obtained in [17] and [16].

In Bayesian machine learning, π is often chosen as the Bayesian posterior, which imposes the following form on the potential energy: U(θ) = -(Σ_{n=1}^N log p(x_n|θ) + log p(θ)) for all θ ∈ R^d, where x ≡ {x_n}_{n=1}^N is a set of observed i.i.d. data points belonging to R^m, for m ≥ 1, p(x_n|·) : R^d → R*_+ is the likelihood function, and p(·) : R^d → R*_+ is the prior distribution.
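A minimal sketch of ULA on an illustrative one-dimensional target (the target N(1, 0.5) and all step-size and iteration choices are ours, not the paper's) shows the Euler-Maruyama update at work:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative target: pi = N(1, 0.5), i.e. U(theta) = (theta - 1)^2 / (2 * 0.5),
# so grad U(theta) = (theta - 1) / 0.5. ULA iterates the Euler-Maruyama update.
def grad_U(theta):
    return (theta - 1.0) / 0.5

gamma, K, burnin = 1e-2, 50000, 5000
theta, samples = 0.0, []
for k in range(K):
    theta = theta - gamma * grad_U(theta) + np.sqrt(2 * gamma) * rng.normal()
    if k >= burnin:
        samples.append(theta)

print(np.mean(samples), np.var(samples))  # close to 1 and 0.5, up to O(gamma) bias
```

Because no Metropolis correction is applied, the chain's invariant law differs from π by an O(γ) discrepancy; this is exactly the bias that the RR extrapolation of Section 2.2 targets.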
In large-scale settings, N becomes very large, and computing ∇U can therefore be computationally very demanding, limiting the applicability of ULA. Inspired by stochastic optimization techniques, the authors of [1] proposed replacing the exact gradient ∇U with an unbiased estimator, yielding the SGLD algorithm, which iteratively applies the following update equation:

θ_{k+1} = θ_k - γ_{k+1} ∇Ũ_{k+1}(θ_k) + √(2γ_{k+1}) Z_{k+1} ,    (2)

where (∇Ũ_k)_{k≥1} is a sequence of i.i.d. unbiased estimators of ∇U. In the following, the common distribution of (∇Ũ_k)_{k≥1} will be denoted by L. A typical choice for the sequence of estimators (∇Ũ_k)_{k≥1} of ∇U is to randomly draw an i.i.d. sequence of data subsamples (R_k)_{k≥1} with R_k ⊂ [N] = {1, ..., N} having a fixed number of elements |R_k| = B for all k ≥ 1, and then set, for all θ ∈ R^d and k ≥ 1,

∇Ũ_k(θ) = -[∇ log p(θ) + (N/B) Σ_{i∈R_k} ∇ log p(x_i|θ)] .    (3)

Convergence analysis of SGLD has been studied in [18, 19], and it has been shown in [20] that for constant step sizes γ_k = γ > 0 for all k ≥ 1, the bias and the MSE of SGLD are of order O(γ + 1/(γK)) and O(γ² + 1/(γK)), respectively. Recently, it has been shown that these bounds are also valid for a more general family of SG-MCMC methods [10].

2.2 Richardson-Romberg Extrapolation for SDEs

Richardson-Romberg extrapolation is a well-known method in numerical analysis that aims to improve the rate of convergence of a sequence. Talay and Tubaro [21] showed that the rate of convergence of Monte Carlo estimates for certain SDEs can be radically improved by using an RR extrapolation, which can be described as follows. Consider the SDE in (1) and its Euler discretization with exact gradients and fixed step size, i.e. γ_k = γ > 0 for all k ≥ 1. Under mild assumptions on U (cf. [22]), the homogeneous Markov chain (θ_k)_{k≥0} is ergodic with a unique invariant distribution π_γ, which is different from the target distribution π.
However, [21] showed that for f sufficiently smooth with polynomial growth, there exists a constant C, depending only on π and f, such that π_γ(f) = π(f) + Cγ + O(γ²), where π(f) = ∫_{R^d} f(x) π(dx). Exploiting this result, RR extrapolation suggests considering two different discretizations of the same SDE with two different step sizes γ and γ/2. If instead of π_γ(f) we consider 2π_{γ/2}(f) - π_γ(f) as the estimator, we obtain π(f) - (2π_{γ/2}(f) - π_γ(f)) = O(γ²). In the case where the sequence (γ_k)_{k≥0} goes to 0 as k → +∞, it has been observed in [23] that the estimator defined by RR extrapolation satisfies a CLT. The application of RR extrapolation to SG-MCMC has not yet been explored.

3 Stochastic Gradient Richardson-Romberg Langevin Dynamics

In this study, we explore the use of RR extrapolation in SG-MCMC algorithms to improve their rates of convergence. In particular, we focus on applying RR extrapolation to the SGLD estimator and present a novel SG-MCMC algorithm referred to as Stochastic Gradient Richardson-Romberg Langevin Dynamics (SGRRLD). The proposed algorithm applies RR extrapolation to SGLD by considering two SGLD chains applied to the SDE (1) with two different sequences of step sizes satisfying the following relation: for the first chain, we consider a sequence of non-increasing step sizes (γ_k)_{k≥1}, and for the second chain we use the sequence of step sizes (η_k)_{k≥1} defined by η_{2k-1} = η_{2k} = γ_k/2 for k ≥ 1. The two chains are started at the same point θ_0 ∈ R^d and are run according to (2), but the chain with the smaller step size is run for twice as many steps as the other one. In other words, the two discretizations are run up to the same time horizon Σ_{k=1}^K γ_k, where K is the number of iterations. Finally, we extrapolate the two SGLD estimators in order to construct the new one. Each iteration of SGRRLD consists of one step of the first SGLD chain with (γ_k)_{k≥1} and two steps of the second SGLD chain with (η_k)_{k≥1}.
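The bias cancellation behind 2π_{γ/2}(f) - π_γ(f) can be checked in closed form on an Ornstein-Uhlenbeck example (our illustrative choice): for dθ_t = -θ_t dt + √2 dB_t, with π = N(0, 1), the Euler chain θ' = (1-γ)θ + √(2γ)Z has invariant variance 2/(2-γ) = 1 + γ/2 + O(γ²), so the plain estimate of the variance has O(γ) bias while the extrapolated one has O(γ²):

```python
# For dtheta = -theta dt + sqrt(2) dB (invariant distribution N(0, 1)), the
# Euler chain theta' = (1-g)*theta + sqrt(2g)*Z has invariant variance
# v(g) = 2 / (2 - g) = 1 + g/2 + O(g^2): an O(g) bias for f(x) = x^2.
def v(g):
    return 2.0 / (2.0 - g)

def rr(g):
    """Richardson-Romberg extrapolated estimate 2*v(g/2) - v(g)."""
    return 2.0 * v(g / 2.0) - v(g)

for g in (0.2, 0.1, 0.05):
    print(g, v(g) - 1.0, rr(g) - 1.0)  # plain bias ~ g/2, RR bias ~ -g^2/8
```

Halving γ halves the plain bias but quarters the extrapolated one, matching the O(γ) versus O(γ²) orders.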
More formally, the proposed algorithm is defined as follows: consider a starting point θ^{(γ)}_0 = θ^{(γ/2)}_0 = θ_0 and, for k ≥ 0,

Chain 1:  θ^{(γ)}_{k+1} = θ^{(γ)}_k - γ_{k+1} ∇Ũ^{(γ)}_{k+1}(θ^{(γ)}_k) + √(2γ_{k+1}) Z^{(γ)}_{k+1} ,    (4)

Chain 2:  θ^{(γ/2)}_{2k+1} = θ^{(γ/2)}_{2k} - (γ_{k+1}/2) ∇Ũ^{(γ/2)}_{2k+1}(θ^{(γ/2)}_{2k}) + √γ_{k+1} Z^{(γ/2)}_{2k+1} ,
          θ^{(γ/2)}_{2k+2} = θ^{(γ/2)}_{2k+1} - (γ_{k+1}/2) ∇Ũ^{(γ/2)}_{2k+2}(θ^{(γ/2)}_{2k+1}) + √γ_{k+1} Z^{(γ/2)}_{2k+2} ,    (5)

where (Z^{(γ/2)}_k)_{k≥1} and (Z^{(γ)}_k)_{k≥1} are two sequences of d-dimensional i.i.d. standard Gaussian random variables, and (∇Ũ^{(γ/2)}_k)_{k≥1}, (∇Ũ^{(γ)}_k)_{k≥1} are two sequences of i.i.d. unbiased estimators of ∇U with the same common distribution L, meaning that the mini-batch size has to be the same.

For a test function f : R^d → R, we then define the estimator of π(f) based on RR extrapolation as follows, for all K ∈ N*:

π̂^R_K(f) = (Σ_{k=2}^{K+1} γ_k)^{-1} Σ_{k=1}^K γ_{k+1} [ {f(θ^{(γ/2)}_{2k-1}) + f(θ^{(γ/2)}_{2k})} - f(θ^{(γ)}_k) ] .    (6)

We provide a pseudo-code of SGRRLD in the supplementary document. Under mild assumptions on ∇U and the law L (see the conditions in the Supplement), by [19, Theorem 7] we can show that π̂^R_K(f) is a consistent estimator of π(f): when lim_{k→+∞} γ_k = 0 and lim_{K→+∞} Σ_{k=1}^K γ_{k+1} = +∞, then lim_{K→+∞} π̂^R_K(f) = π(f) almost surely. However, it is not immediately clear whether applying an RR extrapolation provides any advantage over SGLD in terms of the rate of convergence: even if RR extrapolation were to reduce the bias of the SGLD estimator, this improvement could be offset by an increase in variance. In the context of a general class of SDEs, it has been shown in [13] that the variance of the estimator based on RR extrapolation can be controlled by using correlated Brownian increments, and the best choice in this sense is in fact to take the two sequences (Z^{(γ/2)}_k)_{k≥1} and (Z^{(γ)}_k)_{k≥1} perfectly correlated, i.e. for all k ≥ 1,

Z^{(γ)}_k = (Z^{(γ/2)}_{2k-1} + Z^{(γ/2)}_{2k}) / √2 .    (7)

This choice has also been justified, through a central limit theorem, in the context of sampling the stationary distribution of a diffusion [23]. Inspired by [23], in order to be able to control the variance of the SGRRLD estimator, we consider correlated Brownian increments. In particular, we assume that the Brownian increments in (4) and (5) satisfy the following relationship: there exist a matrix Σ ∈ R^{d×d} and a sequence (W_k)_{k≥1} of d-dimensional i.i.d. standard Gaussian random variables, independent of (Z^{(γ/2)}_k)_{k≥1}, such that I_d - Σ^T Σ is positive semidefinite and, for all k ≥ 0,

Z^{(γ)}_{k+1} = Σ^T (Z^{(γ/2)}_{2k+1} + Z^{(γ/2)}_{2(k+1)}) / √2 + (I_d - Σ^T Σ)^{1/2} W_{k+1} ,    (8)

where I_d denotes the identity matrix. In Section 4, we will show that the properly scaled SGRRLD estimator converges to a Gaussian random variable whose variance is minimal when Σ = I_d, so that Z^{(γ)}_{k+1} should be chosen as in (7). Accordingly, (8) justifies the choice of using the same Brownian motion in the two discretizations, extending the results of [23] to SG-MCMC. Regarding the sequences of estimators of ∇U, we allow them to be correlated but do not assume an explicit form for their relation. It is important to note, however, that if the two sequences (∇Ũ^{(γ/2)}_k)_{k≥1} and (∇Ũ^{(γ)}_k)_{k≥1} do not have the same common distribution, then the SGRRLD estimator can have a bias of the same order as that of vanilla SGLD (with the same sequence of step sizes). In the particular case of (3), for SGRRLD to gain efficiency over SGLD, the mini-batch size has to be the same for the two chains.

4 Convergence Analysis

We analyze the asymptotic and non-asymptotic properties of SGRRLD. In order to save space and avoid obscuring the results, we present the technical conditions under which the theorems hold, as well as the full proofs, in the supplementary document.
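Updates (4)-(7) can be sketched on a toy Gaussian-mean posterior (the model, step size, and iteration counts are our illustrative assumptions, and we use constant steps so the weights in (6) are uniform); the two chains share their Brownian increments as in (7) and use independent mini-batches drawn from the same distribution:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy Gaussian-mean posterior as an illustrative target (not the paper's
# experimental setup): x_n ~ N(theta, 1), theta ~ N(0, 10).
N, B = 1000, 100
x = rng.normal(1.0, 1.0, size=N)

def grad_U_est(theta):
    """Mini-batch estimate of grad U, cf. eq. (3)."""
    batch = rng.choice(N, size=B, replace=False)
    return theta / 10.0 + (N / B) * np.sum(theta - x[batch])

gamma, K, burnin = 1e-4, 20000, 2000
th_g = th_h = 0.0          # chains with step sizes gamma and gamma/2
num = den = 0.0
for k in range(K):
    z1, z2 = rng.normal(), rng.normal()
    # Chain 2 (eq. (5)): two half-steps of size gamma/2.
    th_h = th_h - (gamma / 2) * grad_U_est(th_h) + np.sqrt(gamma) * z1
    f_a = th_h
    th_h = th_h - (gamma / 2) * grad_U_est(th_h) + np.sqrt(gamma) * z2
    f_b = th_h
    # Chain 1 (eq. (4)) with the shared increment of eq. (7).
    z = (z1 + z2) / np.sqrt(2.0)
    th_g = th_g - gamma * grad_U_est(th_g) + np.sqrt(2 * gamma) * z
    if k >= burnin:
        # RR-extrapolated estimator of pi(f) for f(theta) = theta, eq. (6).
        num += gamma * ((f_a + f_b) - th_g)
        den += gamma
pi_hat = num / den

post_mean = x.sum() / (N + 0.1)   # closed-form posterior mean for checking
print(pi_hat, post_mean)
```

Sharing the Brownian increments keeps the two chains tightly coupled, so the per-step difference (f_a + f_b) - th_g stays close to f and the extrapolated estimator's variance remains controlled.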
We first present a central limit theorem for the estimator π̂^R_K(f) of π(f) (see (6)) for a smooth function f. For all n ∈ N, define Γ^{(n)}_K = Σ_{k=1}^K γ^n_{k+1} and Γ_K = Γ^{(1)}_K.

Theorem 1. Let f : R^d → R be a smooth function and (γ_k)_{k≥1} be a non-increasing sequence satisfying lim_{k→+∞} γ_k = 0 and lim_{K→+∞} Γ_K = +∞. Let (θ^{(γ)}_k, θ^{(γ/2)}_k)_{k≥0} be defined by (4)-(5), started at θ_0 ∈ R^d, and assume that the relation (8) holds for Σ ∈ R^{d×d}. Under appropriate conditions on U, f and L, the following statements hold:

a) If lim_{K→+∞} Γ^{(3)}_K / √Γ_K = 0, then √Γ_K (π̂^R_K(f) - π(f)) converges in law, as K goes to infinity, to a zero-mean Gaussian random variable with variance σ²_R, which is minimized when Σ = I_d.

b) If lim_{K→+∞} Γ^{(3)}_K / √Γ_K = κ ∈ (0, +∞), then √Γ_K (π̂^R_K(f) - π(f)) converges in law, as K goes to infinity, to a Gaussian random variable with variance σ²_R and mean κ μ_R.

c) If lim_{K→+∞} Γ^{(3)}_K / √Γ_K = +∞, then (Γ_K / Γ^{(3)}_K)(π̂^R_K(f) - π(f)) converges in probability, as K goes to infinity, to μ_R.

The expressions of σ²_R and μ_R are given in the supplementary document.

Proof (Sketch). The proof follows the same strategy as that of [23, Theorem 4.3] for ULA. We assume that the Poisson equation associated with f has a solution g ∈ C⁹(R^d). The proof then consists in making a 7th-order Taylor expansion of g(θ^{(γ)}_{k+1}), g(θ^{(γ/2)}_{2k}) and g(θ^{(γ/2)}_{2k+1}) at θ^{(γ)}_k, θ^{(γ/2)}_{2k-1} and θ^{(γ/2)}_{2k}, respectively. Then π̂^R_K(f) - π(f) is decomposed as a sum of three terms A_{1,K} + A_{2,K} + A_{3,K}. The term A_{1,K} is the fluctuation term, and Γ^{1/2}_K A_{1,K} converges to a zero-mean Gaussian random variable with variance σ²_R. The term A_{2,K} is the bias term, and Γ_K A_{2,K} / Γ^{(3)}_K converges in probability to μ_R as K goes to +∞ if lim_{K→+∞} Γ^{(3)}_K = +∞. Finally, the last term Γ^{1/2}_K A_{3,K} goes to 0 as K goes to +∞. The detailed proof is given in the supplementary document.

These results state that the Gaussian noise dominates the stochastic-gradient noise.
Moreover, we observe that the correlation between the two sequences of Gaussian random variables (Z^{(γ)}_k)_{k≥1} and (Z^{(γ/2)}_k)_{k≥1} has an important impact on the asymptotic convergence of π̂^R_K(f), whereas the correlation of the two sequences of stochastic gradients does not. A typical choice of decreasing sequence (γ_k)_{k≥1} is of the form γ_k = γ_1 k^{-α} for α ∈ (0, 1]. With such a choice, Theorem 1 states that π̂^R_K(f) converges to π(f) at a rate of order O(K^{-((1-α)/2) ∧ (2α)}), where a ∧ b = min(a, b). The optimal choice of the exponent α for the fastest convergence is therefore α = 1/5, which implies a rate of order O(K^{-2/5}). Note that this rate is higher than that of SGLD, whose optimal rate is of order O(K^{-1/3}). Besides, α = 1/5 corresponds to case b) of Theorem 1, in which the bias and the fluctuation contribute equally at the asymptotic level. Further discussion and detailed calculations can be found in the supplementary document.

We now derive non-asymptotic bounds for the bias and the MSE of the estimator π̂^R_K(f).

Theorem 2. Let f : R^d → R be a smooth function and (γ_k)_{k≥1} be a non-increasing sequence such that there exists K_1 ≥ 1 with γ_{K_1} ≤ 1, and lim_{K→+∞} Γ_K = +∞. Let (θ^{(γ)}_k, θ^{(γ/2)}_k)_{k≥0} be defined by (4)-(5), started at θ_0 ∈ R^d. Under appropriate conditions on U, f and L, there exists C ≥ 0 such that for all K ∈ N, K ≥ 1:

BIAS:  |E[π̂^R_K(f)] - π(f)| ≤ (C/Γ_K) {Γ^{(3)}_K + 1} ,
MSE:   E[(π̂^R_K(f) - π(f))²] ≤ C {(Γ^{(3)}_K/Γ_K)² + 1/Γ_K} .

Proof (Sketch). The proof follows the same strategy as that of Theorem 1, but instead of establishing the exact limits of the fluctuation and bias terms, we only give upper bounds for these two terms. The detailed proof is given in the supplementary document.

It is important to observe that the constant C appearing in Theorem 2 depends on moments of the estimator of the gradient.
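The constant-step-size consequences of these bounds can be checked numerically (the grid and range of K values below are arbitrary choices of ours): with γ_k = γ, we have Γ_K = Kγ and Γ^{(3)}_K = Kγ³, so the bounds reduce, up to constants, to γ² + 1/(Kγ) for the bias and γ⁴ + 1/(Kγ) for the MSE, and minimizing them over γ recovers the scalings γ* ∝ K^{-1/3} and γ* ∝ K^{-1/5}:

```python
import numpy as np

# With constant step gamma: Gamma_K = K*gamma and Gamma3_K = K*gamma**3, so
# the Theorem 2 bounds reduce (up to constants) to:
#   bias bound: gamma**2 + 1/(K*gamma),   MSE bound: gamma**4 + 1/(K*gamma).
gammas = np.logspace(-6, 0, 4001)
Ks = np.array([1e3, 1e4, 1e5, 1e6])

def argmin_gamma(bound):
    """Minimizing step size for each K over the log-spaced grid."""
    return np.array([gammas[np.argmin(bound(gammas, K))] for K in Ks])

bias_opt = argmin_gamma(lambda g, K: g**2 + 1.0 / (K * g))
mse_opt = argmin_gamma(lambda g, K: g**4 + 1.0 / (K * g))

# Fit log gamma* against log K: the slopes should be -1/3 and -1/5.
slope_bias = np.polyfit(np.log(Ks), np.log(bias_opt), 1)[0]
slope_mse = np.polyfit(np.log(Ks), np.log(mse_opt), 1)[0]
print(slope_bias, slope_mse)   # approximately -0.333 and -0.200
```

Plugging these minimizers back into the bounds gives the O(K^{-2/3}) bias and O(K^{-4/5}) MSE rates discussed next.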
For a fixed step size $\gamma_k = \gamma$ for all $k \ge 1$, Theorem 2 shows that the bias is of order $O(\gamma^2 + 1/(K\gamma))$. Therefore, if the number of iterations $K$ is fixed, the choice of $\gamma$ minimizing this bound is $\gamma \propto K^{-1/3}$, obtained by differentiating $x \mapsto x^2 + (xK)^{-1}$. Choosing this value for $\gamma$ leads to the optimal rate $O(K^{-2/3})$ for the bias. Note that this bound is better than for SGLD, whose optimal bias bound at fixed $K$ is of order $O(K^{-1/2})$. The same approach can be applied to the MSE, which is of order $O(\gamma^4 + 1/(K\gamma))$. The optimal choice of the step size is then $\gamma = O(K^{-1/5})$, leading to a bound of order $O(K^{-4/5})$. Similarly to the previous case, this bound is smaller than the bound obtained with SGLD, which is $O(K^{-2/3})$.

If we choose $\gamma_k = \gamma_1 k^{-\alpha}$ for $\alpha \in (0, 1]$, Theorem 2 shows that the bias and the MSE go to 0 as $K$ goes to infinity. More precisely, for $\alpha \in (0, 1)$ the bound on the bias is $O(K^{-(2\alpha)\wedge(1-\alpha)})$ and is therefore minimal for $\alpha = 1/3$. As for the MSE, the bound provided by Theorem 2 is $O(K^{-(4\alpha)\wedge(1-\alpha)})$, which is consistent with Theorem 1 and leads to an optimal bound of order $O(K^{-4/5})$ at $\alpha = 1/5$.

Figure 1: The performance of SGRRLD on synthetic data. (a) The true posterior and the estimated posteriors. (b) The MSE for different problem sizes.

5 Experiments

5.1 Linear Gaussian Model

We conduct our first set of experiments on synthetic data, where we consider a simple Gaussian model whose posterior distribution is analytically available. The model is given as follows:

$\theta \sim \mathcal{N}(0, \sigma^2_\theta I_d)$,  $x_n \mid \theta \sim \mathcal{N}(a_n^\top \theta, \sigma^2_x)$, for all $n$.  (9)

Here, we assume that the explanatory variables $\{a_n\}_{n=1}^N \in \mathbb{R}^{N\times d}$, $\sigma^2_\theta$ and $\sigma^2_x$ are known, and we aim to draw samples from the posterior distribution $p(\theta|x)$.
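Because the posterior of model (9) is Gaussian in closed form, it is easy to check the two-chain construction directly. The sketch below is a minimal $d = 1$ implementation of the coupled SGRRLD chains (one chain with step $\gamma$, one with step $\gamma/2$, driven by the same Brownian increments) and the extrapolated estimator $2\hat{\pi}^{(\gamma/2)} - \hat{\pi}^{(\gamma)}$; all parameter values are illustrative, not those of the paper's experiments.

```python
import math, random

random.seed(0)

# synthetic data from model (9) with d = 1 (all sizes illustrative)
N, s2_th, s2_x = 500, 10.0, 1.0
a = [random.gauss(0.0, math.sqrt(0.5)) for _ in range(N)]
theta_true = random.gauss(0.0, math.sqrt(s2_th))
x = [random.gauss(an * theta_true, math.sqrt(s2_x)) for an in a]

# closed-form Gaussian posterior p(theta | x), used as ground truth
post_prec = 1.0 / s2_th + sum(an * an for an in a) / s2_x
true_var = 1.0 / post_prec
true_mean = true_var * sum(an * xn for an, xn in zip(a, x)) / s2_x

B = N // 10
def stoch_grad(theta):
    """Unbiased minibatch estimate of grad U (U = negative log posterior)."""
    idx = [random.randrange(N) for _ in range(B)]
    return theta / s2_th - (N / B) * sum(a[i] * (x[i] - a[i] * theta) for i in idx) / s2_x

def sgrrld(K, gamma, burn):
    """One coarse chain (step gamma) and one fine chain (step gamma/2) driven by
    the same Brownian increments; returns RR-extrapolated mean and variance."""
    thc = thf = 0.0
    mc1 = mc2 = mf1 = mf2 = 0.0
    for k in range(K):
        z1, z2 = random.gauss(0, 1), random.gauss(0, 1)
        thf -= 0.5 * gamma * stoch_grad(thf) - math.sqrt(gamma) * z1   # fine half-step
        f1 = thf
        thf -= 0.5 * gamma * stoch_grad(thf) - math.sqrt(gamma) * z2   # fine half-step
        thc -= gamma * stoch_grad(thc) - math.sqrt(gamma) * (z1 + z2)  # coarse step
        if k >= burn:
            mf1 += f1 + thf; mf2 += f1 * f1 + thf * thf
            mc1 += thc;      mc2 += thc * thc
    n = K - burn
    mc1 /= n; mc2 /= n; mf1 /= 2 * n; mf2 /= 2 * n
    m1 = 2 * mf1 - mc1          # Richardson-Romberg: 2 * fine - coarse
    m2 = 2 * mf2 - mc2
    return m1, m2 - m1 * m1

est_mean, est_var = sgrrld(K=20000, gamma=1e-4, burn=2000)
print(est_mean, true_mean, est_var, true_var)
```

The coarse chain's injected noise $\sqrt{\gamma}(z_1 + z_2)$ has variance $2\gamma$ and is exactly the sum of the fine chain's two increments, which is what keeps the extrapolated estimator's variance under control.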
In all the experiments, we first randomly generate $a_n \sim \mathcal{N}(0, 0.5\,I_d)$, and we generate the true $\theta$ and the response variables $x$ by using the generative model given in (9). All our experiments are conducted on a standard laptop computer with a 2.5GHz quad-core Intel Core i7 CPU, and in all settings the two chains of SGRRLD are run in parallel.

In our first experiment, we set $d = 1$, $\sigma^2_\theta = 10$, $\sigma^2_x = 1$, $N = 1000$, and the size of each minibatch $B = N/10$. We fix the step size to $\gamma = 10^{-3}$. In order to ensure that both algorithms are run for a fixed computation time, we run SGLD for $K = 21000$ iterations, discarding the first 1000 samples as burn-in, and we run SGRRLD for $K = 10500$ iterations accordingly, discarding the samples generated in the first 500 iterations as burn-in. Figure 1(a) shows the typical results of this experiment. In particular, in the left figure we illustrate the true posterior distribution and the Gaussian density $\mathcal{N}(\hat{\mu}_{\mathrm{post}}, \hat{\sigma}^2_{\mathrm{post}})$ for both algorithms, where $\hat{\mu}_{\mathrm{post}}$ and $\hat{\sigma}^2_{\mathrm{post}}$ denote the empirical posterior mean and variance, respectively. In the right figure, we monitor the bias of the estimated variance as a function of computation time. The results show that SGLD overestimates the posterior variance, whereas SGRRLD is able to reduce this error significantly. We also observe that the results support our theory: the bias of the estimated variance is $\approx 10^{-2}$ for SGLD, whereas this bias is reduced to $\approx 10^{-4}$ with SGRRLD.

Figure 2: Bias and MSE of SGLD and SGRRLD for different step sizes.

In our second experiment, we fix $\gamma$ and $K$ and monitor the MSE of the posterior covariance as a function of the dimension $d$ of the problem. In order to measure the MSE, we compute the squared Frobenius norm of the difference between the true posterior covariance and the estimated covariance.
Similarly to the previous experiment, we average 100 runs that are initialized randomly. The results are shown in Figure 1(b). They clearly show that SGRRLD provides a significant performance improvement over SGLD: the MSE of SGRRLD is of the order of the square of the MSE of SGLD for all values of $d$.

In our next experiment, we use the same setting as in the first experiment and monitor the bias and the MSE of the estimated variance as a function of the step size $\gamma$. For evaluation, we average 100 runs that are initialized randomly. As depicted in Figure 2, the results show that SGRRLD yields significantly better results than SGLD in terms of both the bias and the MSE. Note that for very small $\gamma$, the bias and MSE increase. This is because the term $1/(K\gamma)$ in the bounds of Theorem 2 dominates both the bias and the MSE, as expected since $K$ is fixed. Therefore, we observe a drop in the bias and the MSE as we increase $\gamma$ up to $\approx 8 \times 10^{-5}$, after which they gradually increase along with $\gamma$.

Figure 3: Bias and MSE of SGRRLD with different rates for the step size ($\alpha$).

We conduct the next experiment in order to check the rate of convergence derived in Theorem 2 for a fixed step size $\gamma_k = \gamma$ for all $k \ge 1$. We observe that the optimal choice of step size is of the form $\gamma = \gamma^\star_b K^{-1/3}$ for the bias and $\gamma = \gamma^\star_M K^{-1/5}$ for the MSE. To confirm our findings, we first need to determine the constants $\gamma^\star_b$ and $\gamma^\star_M$, which can be done by using the results from the previous experiment. Accordingly, we observe that $\gamma^\star_b \approx 8.5 \cdot 10^{-5} \cdot (20000)^{1/3} \approx 2 \cdot 10^{-3}$ and $\gamma^\star_M \approx 1.7 \cdot 10^{-4} \cdot (20000)^{1/5} \approx 10^{-3}$. Then, to confirm the correct dependency of $\gamma$ on $K$, we fix $K = 10^6$ and monitor the bias with the sequence of step sizes $\gamma = \gamma^\star_b K^{-\alpha}$ and the MSE with $\gamma = \gamma^\star_M K^{-\alpha}$ for several values of $\alpha$, as given in Figure 3. It can be observed that the optimal convergence rate is still obtained at $\alpha = 1/3$ for the bias and $\alpha = 1/5$ for the MSE, which confirms the results of Theorem 2.
For a decreasing sequence of step sizes $\gamma_k = \gamma^\star_1 k^{-\alpha}$ for $\alpha \in (0, 1]$, we conduct a similar experiment to confirm that the best convergence rate is achieved by choosing $\alpha = 1/3$ in the case of the bias and $\alpha = 1/5$ in the case of the MSE. The resulting figures can be found in the supplementary document.

Figure 4: The performance of RR extrapolation on SGHMC.

In our last synthetic data experiment, instead of SGLD we consider another SG-MCMC algorithm, namely the Stochastic Gradient Hamiltonian Monte Carlo (SGHMC) [6]. We apply the proposed extrapolation scheme described in Section 3 to SGHMC and call the resulting algorithm Stochastic Gradient Richardson-Romberg Hamiltonian Monte Carlo (SGRRHMC). In this experiment, we use the same setting as in Figure 2, and we monitor the bias and the MSE of the estimated variance as functions of $\gamma$. We compare SGRRHMC against SGHMC with Euler discretization [6] and SGHMC with a higher-order splitting integrator (SGHMC-s) [10] (we describe SGHMC, SGHMC-s, and SGRRHMC in more detail in the supplementary document). We average 100 runs that are initialized randomly. As given in Figure 4, the results are similar to the ones obtained in Figure 2: for large enough $\gamma$, SGRRHMC yields significantly better results than SGHMC. For small $\gamma$, the term $1/(K\gamma)$ in the bound derived in Theorem 2 dominates the MSE, and therefore SGRRHMC requires a larger $K$ to improve over SGHMC. For large enough values of $\gamma$, we observe that SGRRHMC obtains an MSE similar to that of SGHMC-s with small $\gamma$, which confirms our claim that the proposed approach can achieve the accuracy of methods based on higher-order integrators.
5.2 Large-Scale Matrix Factorization

In our second set of experiments, we evaluate our approach on a large-scale matrix factorization problem for a link prediction application, where we consider the following probabilistic model: $W_{ip} \sim \mathcal{N}(0, \sigma^2_w)$, $H_{pj} \sim \mathcal{N}(0, \sigma^2_h)$, $X_{ij} \mid W, H \sim \mathcal{N}(\sum_p W_{ip} H_{pj}, \sigma^2_x)$, where $X \in \mathbb{R}^{I\times J}$ is the observed data matrix with missing entries, and $W \in \mathbb{R}^{I\times P}$ and $H \in \mathbb{R}^{P\times J}$ are the latent factors, whose entries are i.i.d. distributed. The aim in this application is to predict the missing values of $X$ by using a low-rank approximation. This model is similar to the Bayesian probabilistic matrix factorization model [24] and is often used in large-scale matrix factorization problems [25], in which SG-MCMC has been shown to outperform optimization methods such as SGD [26].

Figure 5: The performance of SGRRLD on large-scale matrix factorization problems. (a) MovieLens-1Million. (b) MovieLens-10Million. (c) MovieLens-20Million.

In this experiment, we compare SGRRLD against SGLD on three large movie ratings datasets, namely MovieLens 1Million (ML-1M), MovieLens 10Million (ML-10M), and MovieLens 20Million (ML-20M) (grouplens.org). The ML-1M dataset contains about 1 million ratings applied to $I = 3883$ movies by $J = 6040$ users, resulting in a sparse observed matrix $X$ with 4.3% non-zero entries. The ML-10M dataset contains about 10 million ratings applied to $I = 10681$ movies by $J = 71567$ users, resulting in a sparse observed matrix $X$ with 1.3% non-zero entries. Finally, the ML-20M dataset contains about 20 million ratings applied to $I = 27278$ movies by $J = 138493$ users, resulting in a sparse observed matrix $X$ with 0.5% non-zero entries. We randomly select 10% of the data as the test set and use the remaining data for generating the samples. The rank of the factorization is chosen as $P = 10$. We set $\sigma^2_w = \sigma^2_h = \sigma^2_x = 1$. For all datasets, we use a constant step size. We run SGLD for $K = 10500$ iterations, discarding the first 500 samples as burn-in.
In order to keep the computation time the same, we run SGRRLD for $K = 5250$ iterations, discarding the first 250 iterations as burn-in. For ML-1M we set $\gamma = 2 \times 10^{-6}$, and for ML-10M and ML-20M we set $\gamma = 2 \times 10^{-5}$. The size of the subsamples $B$ is selected as $N/10$, $N/50$, and $N/500$ for ML-1M, ML-10M, and ML-20M, respectively. We have implemented SGLD and SGRRLD in C, using the GNU Scientific Library for efficient matrix computations. We fully exploit the inherently parallel structure of SGRRLD by running the two chains in parallel as two independent processes, whereas SGLD cannot benefit from this parallel computation architecture due to its inherently sequential nature. Therefore, their wall-clock times are nearly identical.

Figure 5 shows the comparison of SGLD and SGRRLD in terms of the root mean squared errors (RMSE) obtained on the test sets as a function of wall-clock time. The results clearly show that on all datasets SGRRLD yields significant performance improvements. We observe that in the ML-1M experiment SGRRLD requires only $\approx 200$ seconds to achieve the accuracy that SGLD provides after $\approx 400$ seconds. We see similar behavior in the ML-10M and ML-20M experiments: SGRRLD appears to be more efficient than SGLD. The results indicate that by using our approach, we either obtain the same accuracy as SGLD in a shorter time, or obtain better accuracy by spending the same amount of time as SGLD.

6 Conclusion

We presented SGRRLD, a novel scalable sampling algorithm that aims to reduce the bias of SG-MCMC while keeping the variance at a reasonable level by using RR extrapolation. We provided a formal theoretical analysis and showed that SGRRLD is asymptotically consistent and satisfies a central limit theorem. We further derived bounds for its non-asymptotic bias and mean squared error, and showed that SGRRLD attains higher rates of convergence than all known SG-MCMC methods with first-order integrators, both in finite time and asymptotically.
We supported our findings with both synthetic and real data experiments, where SGRRLD appeared to be more efficient than SGLD in terms of computation time on a large-scale matrix factorization application. As a next step, we plan to explore the use of multi-level Monte Carlo approaches [27] in our framework.

Acknowledgements: This work is partly supported by the French National Research Agency (ANR) as a part of the EDISON 3D project (ANR-13-CORD-0008-02).

References

[1] M. Welling and Y. W. Teh, "Bayesian learning via Stochastic Gradient Langevin Dynamics," in ICML, 2011, pp. 681–688.
[2] G. O. Roberts and R. L. Tweedie, "Exponential convergence of Langevin distributions and their discrete approximations," Bernoulli, vol. 2, no. 4, pp. 341–363, 1996.
[3] H. Robbins and S. Monro, "A stochastic approximation method," Ann. Math. Statist., vol. 22, no. 3, pp. 400–407, 1951.
[4] S. Ahn, A. Korattikara, and M. Welling, "Bayesian posterior sampling via stochastic gradient Fisher scoring," in ICML, 2012.
[5] S. Patterson and Y. W. Teh, "Stochastic gradient Riemannian Langevin dynamics on the probability simplex," in NIPS, 2013.
[6] T. Chen, E. B. Fox, and C. Guestrin, "Stochastic gradient Hamiltonian Monte Carlo," in ICML, 2014.
[7] N. Ding, Y. Fang, R. Babbush, C. Chen, R. D. Skeel, and H. Neven, "Bayesian sampling using stochastic gradient thermostats," in NIPS, 2014, pp. 3203–3211.
[8] X. Shang, Z. Zhu, B. Leimkuhler, and A. J. Storkey, "Covariance-controlled adaptive Langevin thermostat for large-scale Bayesian sampling," in NIPS, 2015, pp. 37–45.
[9] Y. A. Ma, T. Chen, and E. Fox, "A complete recipe for stochastic gradient MCMC," in NIPS, 2015, pp. 2899–2907.
[10] C. Chen, N. Ding, and L. Carin, "On the convergence of stochastic gradient MCMC algorithms with high-order integrators," in NIPS, 2015, pp. 2269–2277.
[11] C. Li, C. Chen, D. Carlson, and L. Carin, "Preconditioned stochastic gradient Langevin dynamics for deep neural networks," in AAAI Conference on Artificial Intelligence, 2016.
[12] U. Şimşekli, R. Badeau, A. T. Cemgil, and G. Richard, "Stochastic quasi-Newton Langevin Monte Carlo," in ICML, 2016.
[13] G. Pagès, "Multi-step Richardson-Romberg extrapolation: remarks on variance control and complexity," Monte Carlo Methods and Applications, vol. 13, no. 1, p. 37, 2007.
[14] U. Grenander, "Tutorial in pattern theory," Division of Applied Mathematics, Brown University, Providence, 1983.
[15] D. Lamberton and G. Pagès, "Recursive computation of the invariant distribution of a diffusion: the case of a weakly mean reverting drift," Stoch. Dyn., vol. 3, no. 4, pp. 435–451, 2003.
[16] V. Lemaire, Estimation de la mesure invariante d'un processus de diffusion, Ph.D. thesis, Université Paris-Est, 2005.
[17] D. Lamberton and G. Pagès, "Recursive computation of the invariant distribution of a diffusion," Bernoulli, vol. 8, no. 3, pp. 367–405, 2002.
[18] I. Sato and H. Nakagawa, "Approximation analysis of stochastic gradient Langevin dynamics by using Fokker-Planck equation and Ito process," in ICML, 2014, pp. 982–990.
[19] Y. W. Teh, A. H. Thiéry, and S. J. Vollmer, "Consistency and fluctuations for stochastic gradient Langevin dynamics," Journal of Machine Learning Research, vol. 17, no. 7, pp. 1–33, 2016.
[20] Y. W. Teh, S. J. Vollmer, and K. C. Zygalakis, "(Non-)asymptotic properties of Stochastic Gradient Langevin Dynamics," arXiv preprint arXiv:1501.00438, 2015.
[21] D. Talay and L. Tubaro, "Expansion of the global error for numerical schemes solving stochastic differential equations," Stochastic Anal. Appl., vol. 8, no. 4, pp. 483–509, 1990.
[22] J. C. Mattingly, A. M. Stuart, and D. J. Higham, "Ergodicity for SDEs and approximations: locally Lipschitz vector fields and degenerate noise," Stochastic Process. Appl., vol. 101, no. 2, pp. 185–232, 2002.
[23] V. Lemaire, G. Pagès, and F. Panloup, "Invariant measure of duplicated diffusions and application to Richardson-Romberg extrapolation," Ann. Inst. H. Poincaré Probab. Statist., vol. 51, no. 4, pp. 1562–1596, 2015.
[24] R. Salakhutdinov and A. Mnih, "Bayesian probabilistic matrix factorization using Markov Chain Monte Carlo," in ICML, 2008, pp. 880–887.
[25] R. Gemulla, E. Nijkamp, P. J. Haas, and Y. Sismanis, "Large-scale matrix factorization with distributed stochastic gradient descent," in ACM SIGKDD, 2011.
[26] S. Ahn, A. Korattikara, N. Liu, S. Rajan, and M. Welling, "Large-scale distributed Bayesian matrix factorization using stochastic gradient MCMC," in KDD, 2015.
[27] V. Lemaire and G. Pagès, "Multilevel Richardson-Romberg extrapolation," arXiv preprint arXiv:1401.1177, 2014.
Active Learning with Oracle Epiphany

Tzu-Kuo Huang* (Uber Advanced Technologies Group, Pittsburgh, PA 15201), Lihong Li (Microsoft Research, Redmond, WA 98052), Ara Vartanian (University of Wisconsin–Madison, Madison, WI 53706), Saleema Amershi (Microsoft Research, Redmond, WA 98052), Xiaojin Zhu (University of Wisconsin–Madison, Madison, WI 53706)

Abstract

We present a theoretical analysis of active learning with more realistic interactions with human oracles. Previous empirical studies have shown oracles abstaining on difficult queries until accumulating enough information to make label decisions. We formalize this phenomenon with an "oracle epiphany model" and analyze active learning query complexity under such oracles for both the realizable and the agnostic cases. Our analysis shows that active learning is possible with oracle epiphany, but incurs an additional cost depending on when the epiphany happens. Our results suggest new, principled active learning approaches with realistic oracles.

1 Introduction

There is currently a wide gap between theory and practice of active learning with oracle interaction. Theoretical active learning assumes an omniscient oracle. Given a query x, the oracle simply answers its label y by drawing from the conditional distribution p(y | x). This oracle model is motivated largely by its convenience for analysis. However, there is mounting empirical evidence from psychology and human-computer interaction research that humans behave in far more complex ways. The oracle may abstain on some queries [Donmez and Carbonell, 2008] (note this is distinct from classifier abstention [Zhang and Chaudhuri, 2014, El-Yaniv and Wiener, 2010]), or their answers can be influenced by the identity and order of previous queries [Newell and Ruths, 2016, Sarkar et al., 2016, Kulesza et al., 2014] and by incentives [Shah and Zhou, 2015].
Theoretical active learning has yet to account for such richness in human behaviors, which are critical to designing principled algorithms to effectively learn from human annotators. This paper takes a step toward bridging this gap. Specifically, we formalize and analyze the phenomenon of "oracle epiphany." Consider active learning from a human oracle to build a webpage classifier on basketball sport vs. others. It is well known in practice that no matter how simple the task looks, the oracle can encounter difficult queries. The oracle may easily answer webpage queries that are obviously about basketball or obviously not about the sport, until she encounters a webpage on basketball jerseys. Here, the oracle cannot immediately decide how to label ("Does this jersey webpage qualify as a webpage about basketball?"). One solution is to allow the oracle to abstain by answering with a special I-don't-know label [Donmez and Carbonell, 2008]. More interestingly, Kulesza et al. [2014] demonstrated that with proper user interface support, the oracle may temporarily abstain on similar queries but then have an "epiphany": she may suddenly decide how to label all basketball apparel-related webpages. Empirical evidence in [Kulesza et al., 2014] suggests that epiphany may be induced by the accumulative effect of seeing multiple similar queries. If a future basketball-jersey webpage query arrives, the oracle will no longer abstain but will answer with the label she determined during epiphany.

*Part of this work was done while the author was with Microsoft Research.

30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.

In this way, the oracle improves herself on the subset of the input space that corresponds to basketball apparel-related webpages. Empirical evidence also suggests that oracle abstention, and subsequent epiphany, may happen separately on different subsets of the input space. When building a cooking vs.
others text classifier, Kulesza et al. [2014] observed oracle epiphany on a subset of cooking supplies documents, and separately on the subset of culinary service documents; on gardening vs. others, they observed separate oracle epiphany on plant information and on local garden documents; on travel vs. others, they observed separate oracle epiphany on photography, rental cars, and medical tourism documents.

Our contributions are three-fold: (i) we formalize oracle epiphany in Section 2; (ii) we analyze EPICAL, a variant of the CAL algorithm [Cohn et al., 1994], for realizable active learning with oracle epiphany in Section 3; (iii) we analyze Oracular-EPICAL, a variant of the Oracular-CAL algorithm [Hsu, 2010, Huang et al., 2015], for agnostic active learning in Section 4. Our query complexity bounds show that active learning is possible with oracle epiphany, although we may incur a penalty waiting for epiphany to happen. This is verified with simulations in Section 5, which highlights the nuanced dependency between query complexity and epiphany parameters.

2 Problem Setting

As in standard active learning, we are given a hypothesis class $\mathcal{H} \subseteq \mathcal{Y}^X$ for some input space $X$ and a binary label set $\mathcal{Y} \triangleq \{-1, 1\}$. There is an unknown distribution $\mu$ over $X \times \mathcal{Y}$, from which examples are drawn i.i.d. The marginal distribution over $X$ is $\mu_X$. Define the expected classification error, or risk, of a classifier $h \in \mathcal{H}$ to be $\mathrm{err}(h) \triangleq \mathbb{E}_{(x,y)\sim\mu}[\mathbb{1}(h(x) \neq y)]$. As usual, the active learning goal is as follows: given any fixed $\epsilon, \delta \in (0, 1)$, we seek an active learning algorithm which, with probability at least $1 - \delta$, returns a hypothesis with classification error at most $\epsilon$ after sending a "small" number of queries to the oracle.

What is unique here is an "oracle epiphany model." The input space consists of two disjoint sets $X = K \cup U$. The oracle knows the label for items in $K$ (for "known") but initially does not know the labels in $U$ (for "unknown").
The oracle will abstain if a query comes from $U$ (unless epiphany happens; see below). Furthermore, $U$ is partitioned into $K$ disjoint subsets $U = U^1 \cup U^2 \cup \ldots \cup U^K$. These correspond to the photography/rental cars/medical tourism subsets in the travel task earlier. The active learner knows neither the partition nor $K$. When the active learner submits a query $x \in X$ to the oracle, the learner receives one of three outcomes in $\mathcal{Y}_+ \triangleq \{-1, 1, \perp\}$, where $\perp$ indicates I-don't-know abstention. Importantly, we assume that epiphany is modeled as $K$ Markov chains: whenever a unique $x \in U^k$ is queried on some unknown region $k \in \{1, \ldots, K\}$ that has not yet experienced epiphany, the oracle has probability $\beta \in [0, 1]$ of epiphany on that region. If epiphany happens, the oracle then understands how to label everything in $U^k$; in effect, the state of $U^k$ is flipped from unknown to known. Epiphany is irrevocable: $U^k$ will stay known from now on, and the oracle will answer accordingly for all future $x$ therein. Thus the oracle will answer $\perp$ only if $U^k$ remains unknown. The requirement for a unique $x$ is to prevent a trivial active learning algorithm that repeatedly queries the same $\perp$ item in an attempt to induce oracle epiphany. This requirement poses no difficulty for analysis if $\mu_X$ is continuous on $X$, since all queries will be unique with probability one. Therefore, our oracle epiphany model is parameterized by $(\beta, K, U^1, \ldots, U^K)$. All our analyses below are based on this epiphany model. Of course, the model is only an approximation of real human oracle behaviors; in Section 6 we discuss more sophisticated epiphany models for future work.

3 The Realizable Case

In this section, we study the realizable active learning case, where we assume there exists some $h^* \in \mathcal{H}$ such that the label of an example $x \in X$ is $y = h^*(x)$. It follows that $\mathrm{err}(h^*) = 0$. Although the realizability assumption is strong, the analysis is insightful on the role of epiphany.
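A minimal simulation of the oracle model of Section 2 makes the dynamics concrete before the analysis. The sketch below is illustrative only: the region layout and labeling functions are caller-supplied placeholders, not part of the paper. Each unknown region flips to known with probability $\beta$ when a unique query lands in it, and flips are irrevocable.

```python
import random

class EpiphanyOracle:
    """Oracle parameterized by (beta, K, U^1, ..., U^K), following Section 2.

    `label` maps x to its true label in {-1, +1}; `region` maps x to the index
    1..K of the unknown region containing x, or 0 if x lies in the known set."""

    def __init__(self, beta, num_regions, label, region, rng=None):
        self.beta = beta
        self.known = {k: False for k in range(1, num_regions + 1)}
        self.label, self.region = label, region
        self.rng = rng or random.Random(0)
        self.seen = set()

    def query(self, x):
        k = self.region(x)
        if k == 0 or self.known[k]:
            return self.label(x)
        if x not in self.seen:            # only *unique* queries can trigger epiphany
            self.seen.add(x)
            if self.rng.random() < self.beta:
                self.known[k] = True      # irrevocable: U^k flips to known
                return self.label(x)
        return None                       # the abstention answer "⊥"

# toy instance on [0, 1): U^1 = [0.4, 0.6), everything else known (illustrative)
oracle = EpiphanyOracle(
    beta=0.3, num_regions=1,
    label=lambda x: 1 if x >= 0.5 else -1,
    region=lambda x: 1 if 0.4 <= x < 0.6 else 0)
print(oracle.query(0.7))  # prints 1: outside the unknown band, labeled immediately
```

With a continuous query distribution, repeated queries never occur, which matches the uniqueness requirement above.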
We will show that the worst-case query complexity has an additional $1/\beta$ dependence. We also discuss nice cases where this $1/\beta$ can be avoided, depending on $U$'s interaction with the disagreement region. Furthermore, our analysis focuses on the $K = 1$ case; that is, the oracle has only one unknown region $U = U^1$. This case is the simplest but captures the essence of the algorithm we propose in this section. For convenience, we will drop the superscript and write $U$. In the next section, we will eliminate both assumptions, and present and analyze an algorithm for the agnostic case with an arbitrary $K \ge 1$.

We modify the standard CAL algorithm [Cohn et al., 1994] to accommodate oracle epiphany. The modified algorithm, which we call EPICAL for "epiphany CAL," is given in Alg. 1. Like CAL, EPICAL receives a stream of unlabeled items and maintains a version space; if an unlabeled item falls into the disagreement region of the version space, the oracle is queried. The essential difference to CAL is that if the oracle answers $\perp$, no update to the version space happens. The stopping criterion ensures that the true risk of any hypothesis in the version space is at most $\epsilon$, with high probability.

Algorithm 1 EPICAL
Input: $\epsilon$, $\delta$, oracle, $X$, $\mathcal{H}$
  Version space $V \leftarrow \mathcal{H}$
  Disagreement region $D \leftarrow \{x \in X \mid \exists h, h' \in V,\ h(x) \neq h'(x)\}$
  for $t = 1, 2, 3, \ldots$ do
    Sample an unlabeled example from the marginal distribution restricted to $D$: $x_t \sim \mu_{X|D}$
    Query the oracle with $x_t$ to get $y_t$
    if $y_t \neq \perp$ then
      $V \leftarrow \{h \in V \mid h(x_t) = y_t\}$
      $D \leftarrow \{x \in X \mid \exists h, h' \in V,\ h(x) \neq h'(x)\}$
    end if
    if $\mu_X(D) \le \epsilon$ then
      Return any $h \in V$
    end if
  end for

Our analysis is based on the following observation: before oracle epiphany, and ignoring all queries that result in $\perp$, EPICAL behaves exactly the same as CAL on an induced active-learning problem. The induced problem has input space $K$, but with a projected hypothesis space that we detail below. Hence, standard CAL analysis bounds the number of queries needed to find a good hypothesis in the induced problem.
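As a concrete, entirely illustrative instantiation of Alg. 1, consider threshold classifiers $h_c(x) = \mathrm{sign}(x - c)$ on $X = [0, 1)$ with $\mu_X$ uniform: the version space is the interval of surviving thresholds, the disagreement region is that same interval, and $\mu_X(D)$ is its length. The oracle below abstains on an unknown band until an epiphany drawn with probability $\beta$ per (almost surely unique) query; the threshold, band, and parameter values are all hypothetical.

```python
import random

def epical_1d(eps, c_star, band, beta, rng):
    """EPICAL (Alg. 1) for thresholds h_c(x) = sign(x - c) on X = [0, 1), mu uniform.
    The version space is the interval [lo, hi] of surviving thresholds, so the
    disagreement region is (lo, hi) and mu_X(D) = hi - lo."""
    lo, hi = 0.0, 1.0
    known = False                          # epiphany state of the single band U
    queries = 0
    while hi - lo > eps:                   # stopping rule: mu_X(D) <= eps
        x = rng.uniform(lo, hi)            # sample from mu restricted to D
        queries += 1
        in_band = band[0] <= x < band[1]
        if in_band and not known and rng.random() < beta:
            known = True                   # oracle epiphany on the band
        if in_band and not known:
            continue                       # oracle answers "⊥": no update
        if x >= c_star:                    # oracle labels x as +1
            hi = min(hi, x)                # only thresholds c <= x survive
        else:                              # oracle labels x as -1
            lo = max(lo, x)                # only thresholds c > x survive
    return (lo + hi) / 2, queries

rng = random.Random(1)
c_hat, n_queries = epical_1d(eps=0.01, c_star=0.37, band=(0.3, 0.45), beta=0.2, rng=rng)
print(c_hat, n_queries)
```

Since the true threshold always stays inside $[lo, hi]$, the returned midpoint is within $\epsilon/2$ of it, so the returned hypothesis has risk at most $\epsilon$ under the uniform marginal.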
Now consider the sequence of probabilities of receiving a $\perp$ label at each step of EPICAL. If these probabilities tend to be small, EPICAL will terminate with an $\epsilon$-risk hypothesis without even having to wait for epiphany. If these probabilities tend to be large, we may often hit the unknown region $U$; but the number of such steps is bounded, because epiphany will then happen with high probability.

Formally, we define the induced active-learning problem as follows. The input space is $\bar X \triangleq K$, and the output space is still $\mathcal{Y}$. The sampling distribution is $\bar\mu_X(x) \triangleq \mu_X(x)\,\mathbb{1}(x \in K)/\mu_X(K)$. The hypothesis space is the projection of $\mathcal{H}$ onto $\bar X$: $\bar{\mathcal{H}} \triangleq \{\bar h \in \mathcal{Y}^{\bar X} \mid \exists h \in \mathcal{H}, \forall x \in \bar X : \bar h(x) = h(x)\}$. Clearly, the induced problem is still realizable; let $\bar h^*$ be the projected target hypothesis. Let $\theta$ be the disagreement coefficient [Hanneke, 2014] for the original problem without unknown regions. The induced problem potentially has a different disagreement coefficient:

$\bar\theta \triangleq \sup_{r>0} r^{-1} \cdot \mathbb{E}_{x\sim\bar\mu_X}\!\left[\mathbb{1}\!\left(\exists \bar h \in \bar{\mathcal{H}} \text{ s.t. } \bar h^*(x) \neq \bar h(x),\ \mathbb{E}_{x'\sim\bar\mu_X}\!\left[\mathbb{1}(\bar h(x') \neq \bar h^*(x'))\right] \le r\right)\right].$

Let $\bar m$ be the number of queries required for the CAL algorithm to find a hypothesis of $\epsilon/2$ risk with probability $1 - \delta/4$ in the induced problem. It is known [Hanneke, 2014, Theorem 5.1] that

$\bar m \le \bar M \triangleq \bar\theta\left(\dim(\bar{\mathcal{H}}) \ln\bar\theta + \ln\frac{4}{\delta}\right)\ln\frac{2}{\epsilon} \cdot \ln\frac{2}{\epsilon},$

where $\dim(\cdot)$ is the VC dimension. Similarly, let $m_{\mathrm{CAL}}$ be the number of queries required for CAL to find a hypothesis of $\epsilon$ risk with probability $1 - \delta/4$ in the original problem; we have

$m_{\mathrm{CAL}} \le M_{\mathrm{CAL}} \triangleq \theta\left(\dim(\mathcal{H}) \ln\theta + \ln\frac{4}{\delta}\right)\ln\frac{1}{\epsilon} \cdot \ln\frac{1}{\epsilon}.$

Furthermore, define $m_\perp \triangleq |\{t \mid y_t = \perp\}|$ to be the number of queries in EPICAL for which the oracle returns $\perp$. We define $U_t$ to be $U$ for an iteration $t$ before epiphany, and $\emptyset$ after that, and $D_t$ to be the disagreement region $D$ at iteration $t$. Finally, define the unknown fraction within disagreement as $\alpha_t \triangleq \mu_X(D_t \cap U_t)/\mu_X(D_t)$. We are now ready to state the main result of this section.

Theorem 1.
Given any $\epsilon$ and $\delta$, EPICAL will, with probability at least $1 - \delta$, return an $\hat h \in \mathcal{H}$ with $\mathrm{err}(\hat h) \le \epsilon$ after making at most $M_{\mathrm{CAL}} + \bar M + \frac{3}{\beta}\ln\frac{4}{\delta}$ queries.

Remark. The bound above consists of three terms. The first is the standard CAL query complexity bound with an omniscient oracle. The other two are the price we pay when the oracle is imperfect. The second term is the query complexity of finding a low-risk hypothesis in the induced active-learning problem. In situations where $\mu_X(U) = \epsilon/2$ and $\beta \ll 1$, it is hard to induce epiphany, but it suffices to find a hypothesis from $\bar{\mathcal{H}}$ with $\epsilon/2$ risk in the induced problem (which implies at most $\epsilon$ risk under the original distribution $\mu_X$); this indicates that $\bar M$ is unavoidable in some cases. The third term is roughly the extra query complexity required to induce epiphany. It is unavoidable in the worst case: when $U = X$, one has to wait for oracle epiphany to start collecting labeled examples to infer $h^*$, and the average number of steps until epiphany is on the order of $1/\beta$. Finally, note that not all three terms contribute simultaneously to the query complexity of EPICAL. As we will see in the analysis and in the experiments, usually one or two of them dominate, depending on how $U$ interacts with the disagreement region. Summing them up simplifies our exposition without changing the order of the worst-case bounds.

Our analysis starts with the definition of the following two events. Lemmas 2 and 3 show that they hold with high probability when running EPICAL; the proofs are delegated to Appendix A. Define:

$E_\perp \triangleq \left\{ m_\perp \le \frac{1}{\beta}\ln\frac{4}{\delta} \right\}$ and $E_\alpha \triangleq \left\{ |\{t \mid \alpha_t > 1/2\}| \le \frac{2}{\beta}\ln\frac{4}{\delta} \right\}.$

Lemma 2. $\Pr\{E_\perp\} \ge 1 - \delta/4$.

Lemma 3. $\Pr\{E_\alpha\} \ge 1 - \delta/4$.

Lemma 4. Assume event $E_\alpha$ holds. Then the number of queries from $K$ before oracle epiphany or before EPICAL terminates, whichever happens first, is at most $\bar m + \frac{2}{\beta}\ln\frac{4}{\delta}$.

Proof (sketch). Denote this quantity by $m$. Before epiphany, $V$ and $D$ in EPICAL behave in exactly the same way as in CAL on $K$.
It takes $\bar m$ queries to get to $\epsilon/2$ accuracy on $K$ by the definition of $\bar m$. If $m \le \bar m$, then $m < \bar m + \frac{2}{\beta}\ln\frac{4}{\delta}$ trivially, and we are done. Otherwise, it must be the case that $\alpha_t > 1/2$ for every step after $V$ reaches $\epsilon/2$ accuracy on $K$. Suppose not; then there is a step $t$ where $\alpha_t \le 1/2$. Note that $V$ reaching $\epsilon/2$ accuracy on $K$ implies $\mu_X(D_t) - \mu_X(D_t \cap U_t) \le \epsilon/2$. Together with $\alpha_t = \mu_X(D_t \cap U_t)/\mu_X(D_t) \le 1/2$, we have $\mu_X(D_t) < \epsilon$. But this would have triggered termination of EPICAL at step $t$, a contradiction. Since we assume $E_\alpha$ holds, we have $m \le \bar m + \frac{2}{\beta}\ln\frac{4}{\delta}$.

Proof of Theorem 1. We prove the query complexity bound assuming that (i) events $E_\perp$ and $E_\alpha$ hold, and (ii) $\bar M$ and $M_{\mathrm{CAL}}$ successfully upper bound the corresponding query complexities of standard CAL. By Lemmas 2 and 3 and a union bound, this holds with probability at least $1 - \delta$. Suppose epiphany happens before EPICAL terminates. By event $E_\perp$ and Lemma 4, the total number of queried examples before epiphany is at most $\bar m + \frac{3}{\beta}\ln\frac{4}{\delta}$. After epiphany, the total number of queries is no more than that of running CAL from scratch, which is at most $M_{\mathrm{CAL}}$. Therefore, the total query complexity is at most $\bar M + M_{\mathrm{CAL}} + \frac{3}{\beta}\ln\frac{4}{\delta}$. Suppose instead that epiphany does not happen before EPICAL terminates. In this case, the number of queries in the unknown region is at most $\frac{1}{\beta}\ln\frac{4}{\delta}$ (event $E_\perp$), and the number of queries in the known region is at most $\bar m + \frac{2}{\beta}\ln\frac{4}{\delta}$ (Lemma 4). Thus, the total number of queries is at most $\bar M + \frac{3}{\beta}\ln\frac{4}{\delta}$.

4 The Agnostic Case

In the agnostic setting the best hypothesis, $h^* \triangleq \arg\min_h \mathrm{err}(h)$, has nonzero error. We want an active learning algorithm that, for a given accuracy $\epsilon > 0$, returns a hypothesis $h$ with small regret $\mathrm{reg}(h, h^*) \triangleq \mathrm{err}(h) - \mathrm{err}(h^*) \le \epsilon$ while making a small number of queries. Among existing agnostic active learning algorithms, we choose to adapt the Oracular-CAL algorithm, first proposed by Hsu [2010] and later improved by Huang et al. [2015].
Oracular-CAL makes no assumption on $\mathcal{H}$ or $\mu$, and can be implemented solely with an empirical risk minimization (ERM) subroutine, which in practice is often well approximated by convex optimization over a surrogate loss. This is a significant advantage over several existing agnostic algorithms, which either explicitly maintain a version space, as done in A² [Balcan et al., 2006], or require a constrained ERM routine [Dasgupta et al., 2007] that may not be well approximated efficiently in practice. IWAL [Beygelzimer et al., 2010] and Active Cover [Huang et al., 2015] are agnostic algorithms that are implementable with an ERM routine, both using importance weights to correct for querying bias. But in the presence of $\perp$'s, choosing proper importance weights becomes challenging. Moreover, the improved Oracular-CAL [Huang et al., 2015] we use² has stronger guarantees than IWAL, and in fact the best known worst-case guarantees among efficient, agnostic active learning algorithms. Our proposed algorithm, Oracular-EPICAL, is given in Alg. 2. Note that $t$ here counts unlabeled data, while in Alg. 1 it counts queries.

Algorithm 2 Oracular-EPICAL
1: Set $c_1 \triangleq 4$ and $c_2 \triangleq 2\sqrt{6} + 9$. Let $\eta_0 \triangleq 1$ and $\eta_t \triangleq \frac{12}{t}\ln\frac{32 t |\mathcal{H}| \ln t}{\delta}$, $t \ge 1$.
2: Initialize labeled data $Z_0 \leftarrow \emptyset$, the version space $V_1 \leftarrow \mathcal{H}$, and the ERM $h_1$ as any $h \in \mathcal{H}$.
3: for $t = 1, 2, \ldots$ do
4:   Observe a new example $x_t$, where $(x_t, y_t) \sim \mu$ i.i.d.
5:   if $x_t \in D_t \triangleq \{x \in X \mid \exists (h, h') \in V_t^2 \text{ s.t. } h(x) \neq h'(x)\}$ then
6:     Query the oracle with $x_t$.
7:     $Z_t \leftarrow Z_{t-1} \cup \{(x_t, y_t)\}$ if the oracle returns $y_t$; $Z_t \leftarrow Z_{t-1}$ if the oracle returns $\perp$.
8:     $u_t \leftarrow \mathbb{1}(\text{oracle returns } \perp)$.
9:   else
10:    $Z_t \leftarrow Z_{t-1} \cup \{(x_t, h_t(x_t))\}$.  // update the labeled data with the current ERM's prediction
11:    $u_t \leftarrow 0$.
12:  end if
13:  $\mathrm{err}(h, Z_t) \triangleq \frac{1}{t}\sum_{i=1}^t \big[\mathbb{1}(x_i \in D_i)(1 - u_i)\,\mathbb{1}(h(x_i) \neq y_i) + \mathbb{1}(x_i \notin D_i)\,\mathbb{1}(h(x_i) \neq h_i(x_i))\big]$.
14:  $h_{t+1} \leftarrow \arg\min_{h \in \mathcal{H}} \mathrm{err}(h, Z_t)$.
15:  $b_t \leftarrow \frac{1}{t}\sum_{i=1}^t u_i$.
16:  $\Delta_t \leftarrow c_1\sqrt{\eta_t\,\mathrm{err}(h_{t+1}, Z_t)} + c_2(\eta_t + b_t)$.
17:  $V_{t+1} \leftarrow \{h \in \mathcal{H} \mid \mathrm{err}(h, Z_t) - \mathrm{err}(h_{t+1}, Z_t) \le \Delta_t\}$.
18: end for
Roughly speaking, Oracular-EPICAL also has an additive factor of O(K/β) compared to Oracular-CAL's query complexity. It keeps a growing set Z of labeled examples. If the unlabeled example xt falls in the disagreement region, the algorithm queries its label: when the oracle returns a label yt, the algorithm adds xt and yt to Z; when the oracle returns ⊥, no update to Z happens. If xt is outside the disagreement region, the algorithm adds xt and the label predicted by the current ERM hypothesis, ht(xt), to Z. Alg. 2 keeps an indicator ut, which records whether ⊥ was returned on xt, and it always updates the ERM and the version space after every new xt. For simplicity we assume a finite H; this can be extended to H with finite VC dimension. The critical modification we make here to accommodate oracle abstention is that the threshold ∆t defining the version space additively depends on the average number of ⊥'s received up to round t. This allows us to show that Oracular-EPICAL retains the favorable bias guarantee of Oracular-CAL: with high probability, all of the imputed labels are consistent with the classifications of h∗, so imputation never pushes the algorithm away from h∗. Oracular-EPICAL only uses the version space in the disagreement test. With the same technique used by Oracular-CAL, summarized in Appendix B, the algorithm is able to perform the test solely with an ERM routine. We now state Oracular-EPICAL's general theoretical guarantees, which hold for any oracle model, and then specialize them for the epiphany model in Section 2. We start with a consistency result:

Theorem 5 (Consistency Guarantee). Pick any 0 < δ < 1/e and let ∆∗t := c1 √(ηt err(h∗)) + c2(ηt + bt). With probability at least 1 − δ, the following holds for all t ≥ 1:

err(h) − err(h∗) ≤ 4∆∗t for all h ∈ Vt+1, and (1)
err(h∗, Zt) − err(ht+1, Zt) ≤ ∆t.
(2)

² This improved version of Oracular-CAL defines the version space using a tighter threshold than the one used by Hsu [2010], and has the same worst-case guarantees as Active Cover [Huang et al., 2015].

All hypotheses in the current version space, including the current ERM, have controlled expected regrets. Compared with Oracular-CAL's consistency guarantee, this is worse by an additive factor of O(bt), the average number of ⊥'s over t examples. Importantly, h∗ always remains in the version space, as implied by (2). This guarantees that all predicted labels used by the algorithm are consistent with h∗, since the entire version space makes the same prediction. The query complexity bound is:

Theorem 6 (Query Complexity Bound). Let Qt ≜ Σ_{i=1}^t 1(xi ∈ Di) denote the total number of queries Alg. 2 makes after observing t examples. Under the conditions of Theorem 5, with probability at least 1 − δ the following holds: ∀t > 0, Qt is bounded by

4θ err(h∗) t + θ · O( √(t err(h∗) ln(t|H|/δ)) ln² t + ln(t|H|/δ) ln t + t bt ln t ) + 8 ln(8t² ln t / δ),

where θ denotes the disagreement coefficient [Hanneke, 2014].

Again, this result is worse than Oracular-CAL's query complexity [Huang et al., 2015] by an additive factor. The magnitude of this factor is less trivial than it seems: since the algorithm increases the threshold by bt, it includes more hypotheses in the version space, which may cause the algorithm to query a lot more. However, our analysis shows that the number of queries only increases by O(t bt ln t), i.e., ln t times the total number of ⊥'s received over t examples. The full proofs of both theorems are in Appendix C. Here we provide the key ingredient. Consider an imaginary dataset Z†t where all the labels queried by the algorithm but not returned by the oracle are imputed, and define the error on this imputed data:

err(h, Z†t) ≜ (1/t) Σ_{i=1}^t [ 1(xi ∈ Di) 1(h(xi) ≠ yi) + 1(xi ∉ Di) 1(h(xi) ≠ hi(xi)) ].
(3)

Note that the version space Vt and therefore the disagreement region Dt are still defined in terms of err(h, Zt), not err(h, Z†t). Also define the empirical regrets between two hypotheses h and h′: reg(h, h′, Zt) ≜ err(h, Zt) − err(h′, Zt), and reg(h, h′, Z†t) on Z†t in the same way. The empirical error and regret on Z†t are not observable, but can be easily bounded by observable quantities:

err(h, Zt) ≤ err(h, Z†t) ≤ err(h, Zt) + bt, (4)
|reg(h, h′, Zt) − reg(h, h′, Z†t)| ≤ bt, (5)

where bt = (1/t) Σ_{i=1}^t ui is also observable. Using a martingale analysis resembling Huang et al. [2015]'s for Oracular-CAL, we prove concentration of the empirical regret reg(h, h∗, Z†t) to its expectation. For every h ∈ Vt+1, the algorithm controls its empirical regret on Zt, which bounds reg(h, h∗, Z†t) by the above. This leads to a bound on the expected regret of h. The query complexity analysis follows the standard framework of Hsu [2010] and Huang et al. [2015]. Next, we specialize the guarantees to the oracle epiphany model in Section 2:

Corollary 7. Assume the epiphany model in Section 2. Fix ϵ > 0, δ > 0. Let d̃ ≜ ln(|H|/(ϵδ)), K̃ ≜ K ln(K/δ) and e∗ ≜ err(h∗). With probability at least 1 − δ, the following holds: the ERM hypothesis h_{tϵ+1} satisfies err(h_{tϵ+1}) − e∗ ≤ ϵ, where

tϵ = O( (d̃ e∗)/ϵ² + (1/ϵ)( d̃ + K̃/β ) ),

and the total number of queries made up to round tϵ is

θ · O( (e∗/ϵ)( (d̃ e∗)/ϵ + K̃/β ) + ln( (e∗/ϵ² + 1/ϵ) d̃ + K̃/(ϵβ) ) · ( e∗/ϵ + 1 )( d̃ + K̃/β ) ).

The proof is in Appendix D. This corollary reveals how the epiphany parameters K and β affect query complexity. Setting K̃ = 0 recovers the result for a perfect oracle, showing that the (unlabeled) sample complexity tϵ worsens by an additive factor of K̃/(βϵ) in both realizable and agnostic settings. For query complexity, in the realizable setting the bound becomes θ · O( ln( (d̃ + K̃/β)/ϵ ) · ( d̃ + K̃/β ) ). In the agnostic setting, the leading term in our bound is θ · O( (e∗/ϵ)² d̃ + (K̃ e∗)/(βϵ) ).
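Stepping back to the key ingredient, the bounds (4)–(5) are purely counting arguments, so they can be sanity-checked numerically. The sketch below builds a toy Zt and its imputed counterpart Z†t (with imputed labels taken, for brevity, from h∗ itself, which the consistency guarantee justifies only on the algorithm's actual runs) and verifies both inequalities for arbitrary hypotheses; all variable names and data here are our own.

```python
import numpy as np

# Numerical sanity check of bounds (4)-(5): the error on the imputed dataset
# Z_t^dagger differs from the observable error on Z_t by at most b_t.
rng = np.random.default_rng(0)
t = 500
x = rng.random(t)
y = (x >= 0.5).astype(int)                 # oracle labels (consistent with h*)
in_D = rng.random(t) < 0.6                 # indicator: x_i in D_i (queried)
u = ((rng.random(t) < 0.15) & in_D).astype(int)   # bottom indicator u_i
b_t = u.mean()
h_imp = y.copy()                           # imputed labels h_i(x_i); h* here

def err_Z(hp):
    # observable error: bottom'ed queries contribute nothing (line 13, Alg. 2)
    return np.mean(in_D * (1 - u) * (hp != y) + (~in_D) * (hp != h_imp))

def err_Zdag(hp):
    # imputed-dataset error (3): the withheld labels are restored
    return np.mean(in_D * (hp != y) + (~in_D) * (hp != h_imp))

h1 = (x >= 0.3).astype(int)                # two arbitrary threshold hypotheses
h2 = (x >= 0.7).astype(int)
```

The gap err(h, Z†t) − err(h, Zt) is exactly the average of in_D · u · 1(h(xi) ≠ yi), which lies in [0, bt]; the regret bound (5) follows by differencing.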
In both cases, our bounds are worse by roughly an additive factor of O(K̃/β) than bounds for perfect oracles. As for the effect of U, the above corollary is a worst-case result: it uses an upper bound on t bt that holds even for U = X. For certain U's the upper bound can be much tighter. For example, if U ∩ Dt = ∅ for sufficiently large t, then t bt will be O(1) for all β, with or without epiphany.

5 Experiments

To complement our theoretical results, we present two simulated experiments on active learning with oracle epiphany: learning a 1D threshold classifier and handwritten digit recognition (OCR). Specifically, we will highlight the dependency of query complexity on the epiphany parameter β and on U.

EPICAL on 1D Threshold Classifiers. Take µX to be the uniform distribution over the interval X = [0, 1]. Our hypothesis space is the set of threshold classifiers H = {ha : a ∈ [0, 1]} where ha(x) = 1(x ≥ a). We choose h∗ = h_{1/2} and set the target classification error at ϵ = 0.05. We illustrate epiphany with a single unknown region: K = 1, U = U1. However, we contrast two shapes of U: in one set of experiments we set U = [0.4, 0.6], which contains the decision boundary 0.5. In this case, the active learner EPICAL must induce oracle epiphany in order to achieve ϵ risk. In another set of experiments U = [0.7, 0.9], where we expect the learner to be able to “bypass” the need for epiphany. Intuitively, this latter U could soon be excluded from the disagreement region. For both U, we systematically vary the oracle epiphany parameter β ∈ {2⁻⁶, 2⁻⁵, . . . , 2⁰}. A small β means epiphany is less likely per query, thus we expect the learner to spend more queries trying to induce epiphany in the case of U = [0.4, 0.6]. In contrast, β may not matter much in the case of U = [0.7, 0.9], since epiphany may not be required. Note that β = 2⁰ = 1 reverts back to the standard active learning oracle, since epiphany always happens immediately.
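The β-dependence this experiment probes has a simple source: each query inside U independently triggers epiphany with probability β, so the number of queries spent waiting for epiphany is geometric with mean 1/β. A quick simulation (our own, with arbitrary trial counts) confirms this 1/β scaling of the wait:

```python
import numpy as np

# The epiphany wait is geometric: each query in U triggers epiphany with
# probability beta, so the expected number of "waiting" queries is 1/beta.
rng = np.random.default_rng(0)

def avg_queries_until_epiphany(beta, trials=20000):
    # number of U-queries up to and including the one that triggers epiphany
    return float(np.mean(rng.geometric(beta, size=trials)))

avg = {b: avg_queries_until_epiphany(b) for b in (1.0, 0.5, 0.25, 0.125)}
```

This is exactly the O(1/β) additive term in the EPICAL bound, realized as expected excess queries when U straddles the decision boundary.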
We run each combination of β and U for 10,000 trials. The results are shown in Figure 1.

[Figure 1: EPICAL results on 1D threshold classifiers for 10,000 trials. Panels (a) U = [0.4, 0.6] and (b) U = [0.7, 0.9] plot the number of queries against β for EPICAL and passive learning; panel (c) plots excess queries against 1/β for U = [0.4, 0.6].]

As expected, (a) shows a clear dependency on β. This indicates that epiphany is necessary in the case U = [0.4, 0.6] for learning to be successful. In contrast, the dependence on β vanishes in (b) when U is shifted sufficiently away from the target threshold (and thus from later disagreement regions). The oracle need not reach epiphany for learning to happen. Note that (b) does not contradict the EPICAL query complexity analysis, since Theorem 1 is a worst-case bound that must hold true for all U. To further clarify the role of β, note that the EPICAL query complexity bound predicts an additive term of O(1/β) on top of the standard CAL query complexities (i.e., both M̄ and MCAL). This term represents “excess queries” needed to induce epiphany. In Figure 1(c) we plot this excess against 1/β for U = [0.4, 0.6]. Excess is computed as the number of EPICAL queries minus the average number of queries for β = 1. Indeed, we see a near-linear relationship between excess queries and 1/β. Finally, as a baseline we compare EPICAL to passive learning. In passive learning x1, x2, . . . are chosen randomly according to µX instead of adaptively. Note that passive learning here is also subject to oracle epiphany. That is, the labels yt are produced by the same oracle epiphany model, and some of them can be ⊥ initially. Our passive learner simply maintains a version space. If it encounters ⊥, it does not update the version space. All EPICAL results are better than passive learning.

Oracular-EPICAL on OCR. We consider the binary classification task of 5 vs. other digits on MNIST [LeCun et al., 1998].
This allows us to design the unknown regions {Uk} as certain other digits, making the experiments more interpretable. Furthermore, we can control how confusable the U digits are with “5” to observe the influence on oracle epiphany. Although Alg. 2 is efficiently implementable with an ERM routine, it still requires two calls to a supervised learning algorithm on every new example. To scale it up, we implement an approximate version of Alg. 2 that uses online optimization in place of the ERM. More details are in Appendix E. While being efficient in practice, this online algorithm may not retain Alg. 2's theoretical guarantees.

[Figure 2: Oracular-EPICAL results on OCR. Panels (a) U = “3” and (b) U = “1” plot the number of queries against β for Oracular-EPICAL and passive learning.]

We use epiphany parameters β ∈ {1, 10⁻¹, 10⁻², 10⁻³, 10⁻⁴, 0}, K = 1, and U is either “3” or “1”. By using β = 1 and β = 0, we include the boundary cases where the oracle is perfect or never has an epiphany. The two different U's correspond to two contrasting scenarios: “3” is among the “nearest” digits to “5” as measured by the binary classification error between “5” and every other single digit, while “1” is the farthest. The two U's are about the same size, each covering roughly 10% of the data. More details and experimental results with other choices of U can be found in Appendix E. For each combination of β and U, we perform 100 random trials. In each trial, we run both the online version of Alg. 2 and online passive logistic regression (also subject to oracle epiphany) over a randomly permuted training set of 60,000 examples, and check the error of the online ERM on the 10,000 testing examples every 10 queries, from 200 up to our query budget of 13,000. In each trial we record the smallest number of queries needed to achieve a test error of 4%. Fig. 2(a) and Fig.
2(b) show the median of this number over the 100 random trials, with error bars at the 25th and 75th quantiles. The effect of β on query complexity is dramatic for the near U = “3” but subdued for the far U = “1”. In particular, for U = “3” small β's force active learning to query as many labels as passive learning. The flattening at 13,000 at the end means no algorithm could achieve a 4% test error within our query budget. For U = “1”, active learning is always much better than passive learning regardless of β. Again, this illustrates that both β and U affect the query complexity. As performance references, passive learning on the entire labeled training data achieves a test error of 2.6%, while predicting the majority class (non-5) has a test error of 8.9%.

6 Discussions

Our analysis reveals a worst-case O(1/β) term in query complexity due to the wait for epiphany, and we hypothesize Ω(K/β) to be the tight lower bound. This immediately raises the question: can we decouple active learning queries from epiphany induction? What if the learner can quickly induce epiphany by showing the oracle a screenful of unlabeled items at a time, without the oracle labeling them? This possibility is hinted at in empirical studies. For example, Kulesza et al. [2014] observed epiphanies resulting from seeing items. Then there is a tradeoff between two learner actions toward the oracle: asking a query (getting a label or a small contribution toward epiphany), or showing several items (not getting labels but potentially a large contribution toward epiphany). One must formalize the cost and benefit of this tradeoff. Of course, real human behaviors are even richer. Epiphanies may be reversible on certain queries, where the oracle begins to have doubts about her previous labeling. Extending our model under more relaxed assumptions is an interesting open question for future research.
Acknowledgments

This work is supported in part by NSF grants IIS-0953219, IIS-1623605, DGE-1545481, CCF-1423237, and by the University of Wisconsin-Madison Graduate School with funding from the Wisconsin Alumni Research Foundation.

References

Maria-Florina Balcan, Alina Beygelzimer, and John Langford. Agnostic active learning. In Proceedings of the 23rd International Conference on Machine Learning, pages 65–72. ACM, 2006.

Alina Beygelzimer, John Langford, Tong Zhang, and Daniel J. Hsu. Agnostic active learning without constraints. In Advances in Neural Information Processing Systems, pages 199–207, 2010.

David Cohn, Les Atlas, and Richard Ladner. Improving generalization with active learning. Machine Learning, 15(2):201–221, 1994.

Sanjoy Dasgupta, Claire Monteleoni, and Daniel J. Hsu. A general agnostic active learning algorithm. In Advances in Neural Information Processing Systems, pages 353–360, 2007.

Pinar Donmez and Jaime G. Carbonell. Proactive learning: cost-sensitive active learning with multiple imperfect oracles. In Proceedings of the 17th ACM Conference on Information and Knowledge Management, pages 619–628. ACM, 2008.

Ran El-Yaniv and Yair Wiener. On the foundations of noise-free selective classification. The Journal of Machine Learning Research, 11:1605–1641, 2010.

Steve Hanneke. Theory of disagreement-based active learning. Foundations and Trends in Machine Learning, 7(2-3):131–309, 2014.

Daniel J. Hsu. Algorithms for Active Learning. PhD thesis, University of California at San Diego, 2010.

Tzu-Kuo Huang, Alekh Agarwal, Daniel J. Hsu, John Langford, and Robert E. Schapire. Efficient and parsimonious agnostic active learning. In NIPS, pages 2737–2745, 2015.

S. M. Kakade and A. Tewari. On the generalization ability of online strongly convex programming algorithms. In Advances in Neural Information Processing Systems 21, 2009.

Nikos Karampatziakis and John Langford. Online importance weight aware updates. In UAI, pages 392–399, 2011.
Todd Kulesza, Saleema Amershi, Rich Caruana, Danyel Fisher, and Denis Xavier Charles. Structured labeling for facilitating concept evolution in machine learning. In CHI, pages 3075–3084, 2014.

Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998.

Edward Newell and Derek Ruths. How one microtask affects another. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems, CHI, pages 3155–3166, 2016.

Advait Sarkar, Cecily Morrison, Jonas F. Dorn, Rishi Bedi, Saskia Steinheimer, Jacques Boisvert, Jessica Burggraaff, Marcus D'Souza, Peter Kontschieder, Samuel Rota Bulò, et al. Setwise comparison: Consistent, scalable, continuum labels for computer vision. In CHI, 2016.

Nihar Bhadresh Shah and Denny Zhou. Double or nothing: Multiplicative incentive mechanisms for crowdsourcing. In Advances in Neural Information Processing Systems, pages 1–9, 2015.

Chicheng Zhang and Kamalika Chaudhuri. Beyond disagreement-based agnostic active learning. In Advances in Neural Information Processing Systems, pages 442–450, 2014.
Stochastic Optimization for Large-scale Optimal Transport

Aude Genevay (CEREMADE, Université Paris-Dauphine; INRIA – Mokaplan project-team) genevay@ceremade.dauphine.fr
Marco Cuturi (CREST, ENSAE, Université Paris-Saclay) marco.cuturi@ensae.fr
Gabriel Peyré (CNRS and DMA, École Normale Supérieure; INRIA – Mokaplan project-team) gabriel.peyre@ens.fr
Francis Bach (INRIA – Sierra project-team; DI, ENS) francis.bach@inria.fr

Abstract

Optimal transport (OT) defines a powerful framework to compare probability distributions in a geometrically faithful way. However, the practical impact of OT is still limited because of its computational burden. We propose a new class of stochastic optimization algorithms to cope with large-scale OT problems. These methods can handle arbitrary distributions (either discrete or continuous) as long as one is able to draw samples from them, which is the typical setup in high-dimensional learning problems. This alleviates the need to discretize these densities, while giving access to provably convergent methods that output the correct distance without discretization error. These algorithms rely on two main ideas: (a) the dual OT problem can be re-cast as the maximization of an expectation; (b) the entropic regularization of the primal OT problem yields a smooth dual optimization which can be addressed with algorithms that have a provably faster convergence.
We instantiate these ideas in three different setups: (i) when comparing a discrete distribution to another, we show that incremental stochastic optimization schemes can beat Sinkhorn's algorithm, the current state-of-the-art finite-dimensional OT solver; (ii) when comparing a discrete distribution to a continuous density, a semi-discrete reformulation of the dual program is amenable to averaged stochastic gradient descent, leading to better performance than approximately solving the problem by discretization; (iii) when dealing with two continuous densities, we propose a stochastic gradient descent over a reproducing kernel Hilbert space (RKHS). This is currently the only known method to solve this problem, apart from computing OT on finite samples. We back up these claims on a set of discrete, semi-discrete and continuous benchmark problems.

30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.

1 Introduction

Many problems in computational sciences require to compare probability measures or histograms. As a set of representative examples, let us quote: bag-of-visual-words comparison in computer vision [17], color and shape processing in computer graphics [21], bag-of-words for natural language processing [11] and multi-label classification [9]. In all of these problems, a geometry between the features (words, visual words, labels) is usually known, and can be leveraged to compare probability distributions in a geometrically faithful way. This underlying geometry might be for instance the planar Euclidean domain for 2-D shapes, a perceptual 3D color metric space for image processing or a high-dimensional semantic embedding for words. Optimal transport (OT) [24] is the canonical way to automatically lift this geometry to define a metric for probability distributions. That metric is known as the Wasserstein or earth mover's distance.
As an illustrative example, OT can use a metric between words to build a metric between documents that are represented as frequency histograms of words (see [11] for details). All the above-cited lines of work advocate, among others, that OT is the natural choice to solve these problems, and that it leads to performance improvements when compared to geometrically-oblivious distances such as the Euclidean or χ² distances or the Kullback-Leibler divergence. However, these advantages come at the price of an enormous computational overhead. This is especially true because current OT solvers require to sample beforehand these distributions on a pre-defined set of points, or on a grid. This is both inefficient (in terms of storage and speed) and counter-intuitive. Indeed, most high-dimensional computational scenarios naturally represent distributions as objects from which one can sample, not as density functions to be discretized. Our goal is to alleviate these shortcomings. We propose a class of provably convergent stochastic optimization schemes that can handle both discrete and continuous distributions through sampling.

Previous works. The prevalent way to compute OT distances is by solving the so-called Kantorovitch problem [10] (see Section 2 for a short primer on the basics of OT formulations), which boils down to a large-scale linear program when dealing with discrete distributions (i.e., finite weighted sums of Dirac masses). This linear program can be solved using network flow solvers, which can be further refined to assignment problems when comparing measures of the same size with uniform weights [3]. Recently, regularized approaches that solve the OT problem with an entropic penalization [6] have been shown to be extremely efficient to approximate OT solutions at a very low computational cost. These regularized approaches have supported recent applications of OT to computer graphics [21] and machine learning [9].
These methods apply the celebrated Sinkhorn algorithm [20], and can be extended to solve more exotic transportation-related problems such as the computation of barycenters [21]. Their chief computational advantage over competing solvers is that each iteration boils down to matrix-vector multiplications, which can be easily parallelized, streams extremely well on GPUs, and enjoys linear-time implementation on regular grids or triangulated domains [21]. These methods are however purely discrete and cannot cope with continuous densities. The only known class of methods that can overcome this limitation are so-called semi-discrete solvers [1], which can be implemented efficiently using computational geometry primitives [12]. They can compute the distance between a discrete distribution and a continuous density. Nonetheless, they are restricted to the Euclidean squared cost, and can only be implemented in low dimensions (2-D and 3-D). Solving these semi-discrete problems efficiently could have a significant impact for applications to density fitting with an OT loss [2] for machine learning applications, see [13]. Lastly, let us point out that there is currently no method that can compute OT distances between two continuous densities, which is thus an open problem we tackle in this article.

Contributions. This paper introduces stochastic optimization methods to compute large-scale optimal transport in all three possible settings: discrete OT, to compare a discrete vs. another discrete measure; semi-discrete OT, to compare a discrete vs. a continuous measure; and continuous OT, to compare a continuous vs. another continuous measure. These methods can be used to solve classical OT problems, but they enjoy faster convergence properties when considering their entropic-regularized versions. We show that the discrete regularized OT problem can be tackled using incremental algorithms, and we consider in particular the stochastic averaged gradient (SAG) method [19].
Each iteration of that algorithm requires N operations (N being the size of the supports of the input distributions), which makes it scale better in large-scale problems than the state-of-the-art Sinkhorn algorithm, while still enjoying a convergence rate of O(1/k), k being the number of iterations. We show that the semi-discrete OT problem can be solved using averaged stochastic gradient descent (SGD), whose convergence rate is O(1/√k). This approach is numerically advantageous over the brute force approach consisting in sampling first the continuous density to solve next a discrete OT problem. Lastly, for continuous optimal transport, we propose a novel method which makes use of an expansion of the dual variables in a reproducing kernel Hilbert space (RKHS). This allows us for the first time to compute, with a converging algorithm, OT distances between two arbitrary densities, under the assumption that the two potentials belong to such an RKHS.

Notations. In the following we consider two metric spaces X and Y. We denote by M¹₊(X) the set of positive Radon probability measures on X, and C(X) the space of continuous functions on X. Let µ ∈ M¹₊(X), ν ∈ M¹₊(Y); we define

Π(µ, ν) ≜ { π ∈ M¹₊(X × Y) ; ∀(A, B) ⊂ X × Y, π(A × Y) = µ(A), π(X × B) = ν(B) },

the set of joint probability measures on X × Y with marginals µ and ν. The Kullback-Leibler divergence between joint probabilities is defined as

∀(π, ξ) ∈ M¹₊(X × Y)², KL(π|ξ) ≜ ∫_{X×Y} ( log( (dπ/dξ)(x, y) ) − 1 ) dπ(x, y),

where we denote by dπ/dξ the relative density of π with respect to ξ, and by convention KL(π|ξ) ≜ +∞ if π does not have a density with respect to ξ. The Dirac measure at point x is δx. For a set C, ιC(x) = 0 if x ∈ C and ιC(x) = +∞ otherwise. The probability simplex of N bins is ΣN = { µ ∈ R^N₊ ; Σᵢ µᵢ = 1 }. Element-wise multiplication of vectors is denoted by ⊙, and K⊤ denotes the transpose of a matrix K. We denote 1N = (1, . . . , 1)⊤ ∈ R^N and 0N = (0, . . . , 0)⊤ ∈ R^N.
2 Optimal Transport: Primal, Dual and Semi-dual Formulations

We consider the optimal transport problem between two measures µ ∈ M¹₊(X) and ν ∈ M¹₊(Y), defined on metric spaces X and Y. No particular assumption is made on the form of µ and ν; we only assume that they both can be sampled from, to be able to apply our algorithms.

Primal, Dual and Semi-dual Formulations. The Kantorovich formulation [10] of OT and its entropic regularization [6] can be conveniently written in a single convex optimization problem as follows: ∀(µ, ν) ∈ M¹₊(X) × M¹₊(Y),

Wε(µ, ν) ≜ min_{π ∈ Π(µ,ν)} ∫_{X×Y} c(x, y) dπ(x, y) + ε KL(π | µ ⊗ ν). (Pε)

Here c ∈ C(X × Y), and c(x, y) should be interpreted as the “ground cost” to move a unit of mass from x to y. This c is typically application-dependent, and reflects some prior knowledge on the data to process. We refer to the introduction for a list of previous work where various examples (in imaging, vision, graphics or machine learning) of such costs are given. When X = Y, ε = 0 and c = d^p for p ≥ 1, where d is a distance on X, then W0(µ, ν)^{1/p} is known as the p-Wasserstein distance on M¹₊(X). Note that this definition can be used for any type of measure, both discrete and continuous. When ε > 0, problem (Pε) is strongly convex, so that the optimal π is unique, and algebraic properties of the KL regularization result in computations that can be tackled using the Sinkhorn algorithm [6]. For any c ∈ C(X × Y), we define the following constraint set

Uc ≜ { (u, v) ∈ C(X) × C(Y) ; ∀(x, y) ∈ X × Y, u(x) + v(y) ≤ c(x, y) },

and define its indicator function as well as its “smoothed” approximation

ιε_{Uc}(u, v) ≜ ιUc(u, v) if ε = 0; ε ∫_{X×Y} exp( (u(x) + v(y) − c(x, y))/ε ) dµ(x) dν(y) if ε > 0. (1)

For any v ∈ C(Y), we define its c-transform and its “smoothed” approximation

∀x ∈ X, v^{c,ε}(x) ≜ min_{y∈Y} c(x, y) − v(y) if ε = 0; −ε log ∫_Y exp( (v(y) − c(x, y))/ε ) dν(y) if ε > 0. (2)

The proposition below describes two dual problems.
It is central to our analysis and paves the way for the application of stochastic optimization methods.

Proposition 2.1 (Dual and semi-dual formulations). For ε ≥ 0, one has

Wε(µ, ν) = max_{u∈C(X), v∈C(Y)} Fε(u, v) ≜ ∫_X u(x) dµ(x) + ∫_Y v(y) dν(y) − ιε_{Uc}(u, v), (Dε)
= max_{v∈C(Y)} Hε(v) ≜ ∫_X v^{c,ε}(x) dµ(x) + ∫_Y v(y) dν(y) − ε, (Sε)

where ιε_{Uc} is defined in (1) and v^{c,ε} in (2). Furthermore, u solving (Dε) is recovered from an optimal v solving (Sε) as u = v^{c,ε}. For ε > 0, the solution π of (Pε) is recovered from any (u, v) solving (Dε) as dπ(x, y) = exp( (u(x) + v(y) − c(x, y))/ε ) dµ(x) dν(y).

Proof. Problem (Dε) is the convex dual of (Pε), and is derived using Fenchel-Rockafellar's theorem. The relation between u and v is obtained by writing the first-order optimality condition for v in (Dε). Plugging this expression back in (Dε) yields (Sε).

Problem (Pε) is called the primal while (Dε) is its associated dual problem. We refer to (Sε) as the “semi-dual” problem, because in the special case ε = 0, (Sε) boils down to the so-called semi-discrete OT problem [1]. Both dual problems are concave maximization problems. The optimal dual variables (u, v)—known as Kantorovitch potentials—are not unique, since for any solution (u, v) of (Dε), (u + λ, v − λ) is also a solution for any λ ∈ R. When ε > 0, they can be shown to be unique up to this scalar translation [6]. We refer to the supplementary material for a discussion (and proofs) of the convergence of the solutions of (Pε), (Dε) and (Sε) towards those of (P0), (D0) and (S0) as ε → 0. A key advantage of (Sε) over (Dε) is that, when ν is a discrete density (but not necessarily µ), then (Sε) is a finite-dimensional concave maximization problem, which can thus be solved using stochastic programming techniques, as highlighted in Section 4. By contrast, when both µ and ν are continuous densities, these dual problems are intrinsically infinite-dimensional, and we propose in Section 5 more advanced techniques based on RKHSs.
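For intuition on the smoothed c-transform (2): for a discrete ν it is a ν-weighted log-sum-exp, it can be evaluated stably with the usual max-shift, and it approaches the hard minimum min_y c(x, y) − v(y) as ε → 0 (always from above, since the weighted log-sum-exp is bounded by the maximum). A small sketch with arbitrary numbers of our own:

```python
import numpy as np

# Smoothed c-transform (2) for a discrete nu, computed with a stable
# log-sum-exp shift; eps -> 0 recovers the unregularized c-transform.
ys = np.array([0.1, 0.4, 0.9])          # support of nu
nu = np.array([0.2, 0.5, 0.3])          # weights of nu
v = np.array([0.0, 0.3, -0.2])          # dual potential on the support
x = 0.55
c = (x - ys) ** 2                       # c(x, y_j), squared-Euclidean cost

def c_transform(eps):
    if eps == 0.0:
        return float(np.min(c - v))     # hard minimum, eq. (2) with eps = 0
    s = (v - c) / eps
    m = s.max()                         # shift for numerical stability
    return float(-eps * (np.log(np.dot(np.exp(s - m), nu)) + m))

hard = c_transform(0.0)
soft = {e: c_transform(e) for e in (1.0, 0.1, 0.01)}
```

The gap between the smoothed and hard transforms is at most −ε log ν_{j⋆} plus exponentially small terms, which is why moderate ε already gives a close, differentiable surrogate.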
Stochastic Optimization Formulations. The fundamental property needed to apply stochastic programming is that both dual problems (Dε) and (Sε) must be rephrased as maximizing expectations:

∀ε > 0, Fε(u, v) = E_{X,Y}[ fε(X, Y, u, v) ] and ∀ε ≥ 0, Hε(v) = E_X[ hε(X, v) ], (3)

where the random variables X and Y are independent and distributed according to µ and ν respectively, and where, for (x, y) ∈ X × Y and (u, v) ∈ C(X) × C(Y),

∀ε > 0, fε(x, y, u, v) ≜ u(x) + v(y) − ε exp( (u(x) + v(y) − c(x, y))/ε ),
∀ε ≥ 0, hε(x, v) ≜ ∫_Y v(y) dν(y) + v^{c,ε}(x) − ε.

This reformulation is at the heart of the methods detailed in the remainder of this article. Note that the dual problem (Dε) cannot be cast as an unconstrained expectation maximization problem when ε = 0, because of the constraint on the potentials which arises in that case. When ν is discrete, i.e. ν = Σ_{j=1}^J νj δ_{yj}, the potential v is a J-dimensional vector (vj)_{j=1,...,J} and we can compute the gradient of hε. When ε > 0 the gradient reads

∇v hε(v, x) = ν − π(x),

and the Hessian is given by

∂²v hε(v, x) = (1/ε) ( π(x)π(x)⊤ − diag(π(x)) ), where π(x)i = νi exp( (vi − c(x, yi))/ε ) ( Σ_{j=1}^J νj exp( (vj − c(x, yj))/ε ) )⁻¹.

The eigenvalues of the Hessian lie in [−1/ε, 0], which guarantees a Lipschitz gradient but does not ensure strong concavity. In the unregularized case, h0 is not smooth and a supergradient is given by ∇v h0(v, x) = ν − π̃(x), where π̃(x)i = 1(i = j⋆) and j⋆ = arg min_{j∈{1,...,J}} c(x, yj) − vj (when several elements attain the arg min, we arbitrarily choose one of them to be j⋆). We insist on this lack of strong concavity of the semi-dual problem, as it impacts the convergence properties of the stochastic algorithms (stochastic averaged gradient and stochastic gradient descent) described below.

3 Discrete Optimal Transport

We assume in this section that both µ and ν are discrete measures, i.e.
finite sums of Diracs, of the form µ = Σ_{i=1}^I µi δ_{xi} and ν = Σ_{j=1}^J νj δ_{yj}, where (xi)i ⊂ X and (yj)j ⊂ Y, and the histogram weight vectors are µ ∈ ΣI and ν ∈ ΣJ. These discrete measures may come from the evaluation of continuous densities on a grid, counting features in a structured object, or be empirical measures based on samples. This setting is relevant for several applications, including all known applications of the earth mover's distance. We show in this section that our stochastic formulation can prove extremely efficient to compare measures with a large number of points.

Discrete Optimization and Sinkhorn. In this setup, the primal (Pε), dual (Dε) and semi-dual (Sε) problems can be rewritten as finite-dimensional optimization problems involving the cost matrix c ∈ R^{I×J}₊ defined by c_{i,j} = c(xi, yj):

Wε(µ, ν) = min_{π ∈ R^{I×J}₊} { Σ_{i,j} c_{i,j} π_{i,j} + ε Σ_{i,j} ( log( π_{i,j} / (µi νj) ) − 1 ) π_{i,j} ; π 1J = µ, π⊤ 1I = ν }, (P̄ε)
= max_{u ∈ R^I, v ∈ R^J} Σ_i ui µi + Σ_j vj νj − ε Σ_{i,j} exp( (ui + vj − c_{i,j})/ε ) µi νj, (for ε > 0) (D̄ε)
= max_{v ∈ R^J} H̄ε(v) = Σ_{i∈I} h̄ε(xi, v) µi, where (S̄ε)

h̄ε(x, v) = Σ_{j∈J} vj νj + { −ε log( Σ_{j∈J} exp( (vj − c(x, yj))/ε ) νj ) − ε if ε > 0; min_j ( c(x, yj) − vj ) if ε = 0. } (4)

The state-of-the-art method to solve the discrete regularized OT problem (i.e. when ε > 0) is Sinkhorn's algorithm [6, Alg. 1], which has a linear convergence rate [8]. It corresponds to a block coordinate maximization, successively optimizing (D̄ε) with respect to either u or v. Each iteration of this algorithm is however costly, because it requires a matrix-vector multiplication. Indeed, this corresponds to a “batch” method where all the samples (xi)i and (yj)j are used at each iteration, which thus has complexity O(N²) where N = max(I, J). We now detail how to alleviate this issue using online stochastic optimization methods.

Incremental Discrete Optimization when ε > 0.
Stochastic gradient descent (SGD), in which an index k is drawn from the distribution µ at each iteration, can be used to minimize the finite sum that appears in ( ¯Sε). The gradient of that term, ∇¯hε(xk, ·), can be used as a proxy for the full gradient in a standard gradient ascent step to maximize ¯Hε. Algorithm 1 SAG for Discrete OT Input: C Output: v v ←0J, d ←0J, ∀i, gi ←0J for k = 1, 2, . . . do Sample i ∈{1, 2, . . . , I} uniformly d ←d −gi gi ←µi∇v¯hε(xi, v) d ←d + gi ; v ←v + Cd end for When ε > 0, the finite sum appearing in ( ¯Sε) suggests using incremental gradient methods—rather than purely stochastic ones—which are known to converge faster than SGD. We propose to use the stochastic averaged gradient (SAG) [19]. Like SGD, SAG operates at each iteration by sampling a point xk from µ, to compute the gradient corresponding to that sample for the current estimate v. Unlike SGD, SAG keeps in memory a copy of that gradient. Another difference is that SAG applies a fixed-length update, in the direction of the average of all gradients stored so far, which provides a better proxy for the gradient of the entire sum. This improves the convergence rate to | ¯Hε(v⋆ ε) −¯Hε(vk)| = O(1/k), where v⋆ ε is a maximizer of ¯Hε, at the expense of storing the gradient for each of the I points. This expense can be mitigated by considering mini-batches instead of individual points. Note that the SAG algorithm is adaptive to strong convexity and will be linearly convergent around the optimum. The pseudo-code for SAG is provided in Algorithm 1; we defer more details on SGD to Section 4, in which it will be shown to play a crucial role. Note that the Lipschitz constant of all these terms is upper-bounded by L = maxi µi/ε. Numerical Illustrations on Bags of Word-Embeddings. Comparing texts using a Wasserstein distance on their representations as clouds of word embeddings has recently been shown to yield state-of-the-art accuracy for text classification [11].
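The bookkeeping in Algorithm 1 (one stored gradient per sample, a running sum d, a fixed step along d) can be sketched as follows. This is our illustrative serial version; the stepsize here is chosen conservatively for stability on a toy problem and is not the paper's tuned choice of 1/L, 3/L, or 5/L.

```python
import numpy as np

def sag_discrete_ot(mu, nu, C, eps, step, n_iter=10000, seed=0):
    """SAG on the discrete semi-dual: keep one stored gradient g[i]
    per sample i, maintain their running sum d, and ascend along d."""
    rng = np.random.default_rng(seed)
    I, J = C.shape
    v = np.zeros(J)
    d = np.zeros(J)             # running sum of all stored gradients
    g = np.zeros((I, J))        # stored gradient for each sample i
    for _ in range(n_iter):
        i = rng.integers(I)
        d -= g[i]
        # refresh: mu_i * grad_v h_eps(x_i, v) = mu_i * (nu - pi(x_i))
        z = (v - C[i]) / eps
        z -= z.max()
        w = nu * np.exp(z)
        g[i] = mu[i] * (nu - w / w.sum())
        d += g[i]
        v += step * d           # fixed-length step along the aggregate
    return v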
The authors of [11] have however highlighted that this accuracy comes at a large computational cost. We test our stochastic approach to discrete OT in this scenario, using the complete works of 35 authors (names in supplementary material). We use Glove word embeddings [14] to represent words, namely X = Y = R300. We discard all most frequent 1, 000 words that appear at the top of the file glove.840B.300d provided on the authors’ website. We sample N = 20, 000 words (found within the remaining huge dictionary of relatively rare words) from each authors’ complete work. Each author is thus represented as a cloud of 20, 000 points in R300. The cost function c between the word embeddings is the squared-Euclidean distance, re-scaled so that it has a unit empirical median on 2, 000 points sampled randomly among all vector embeddings. We set ε to 0.01 (other values are considered in the supplementary material). We compute all (35 × 34/2 = 595) pairwise regularized Wasserstein distances using both the Sinkhorn algorithm and SAG. Following the recommendations in [19], SAG’s stepsize is tested for 3 different settings, 1/L, 3/L and 5/L. The convergence of each algorithm is measured by computing the ℓ1 norm of the gradient of the full sum (which also corresponds to the marginal violation of the primal transport solution that can be recovered with these dual variables[6]), as well as the ℓ2 norm of the 5 Figure 1: We compute all 595 pairwise word mover’s distances [11] between 35 very large corpora of text, each represented as a cloud of I = 20, 000 word embeddings. We compare the Sinkhorn algorithm with SAG, tuned with different stepsizes. Each pass corresponds to a I × I matrix-vector product. We used minibatches of size 200 for SAG. Left plot: convergence of the gradient ℓ1 norm (average and ± standard deviation error bars). A stepsize of 3/L achieves a substantial speed-up of ≈2.5, as illustrated in the boxplots in the center plot. 
Convergence to v⋆(the best dual variable across all variables after 4, 000 passes) in ℓ2 norm is given in the right plot, up to 2, 000 ≈211 steps. deviation to the optimal scaling found after 4, 000 passes for any of the three methods. Results are presented in Fig. 1 and suggest that SAG can be more than twice faster than Sinkhorn on average for all tolerance thresholds. Note that SAG retains exactly the same parallel properties as Sinkhorn: all of these computations can be streamlined on GPUs. We used 4 Tesla K80 cards to compute both SAG and Sinkhorn results. For each computation, all 4, 000 passes take less than 3 minutes (far less are needed if the goal is only to approximate the Wasserstein distance itself, as proposed in [11]). 4 Semi-Discrete Optimal Transport In this section, we assume that µ is an arbitrary measure (in particular, it needs not to be discrete) and that ν = PJ j=1 νjδyj is a discrete measure. This corresponds to the semi-discrete OT problem [1, 12]. The semi-dual problem (Sε) is then a finite-dimensional maximization problem, written in expectation form as Wε(µ, ν) = max v∈RJ EX ¯hε(X, v) where X ∼µ and ¯hε is defined in (4). Algorithm 2 Averaged SGD for Semi-Discrete OT Input: C Output: v ˜v ←0J , v ←˜v for k = 1, 2, . . . do Sample xk from µ ˜v ←˜v + C √ k∇v¯hε(xk, ˜v) v ←1 k ˜v + k−1 k v end for Stochastic Semi-discrete Optimization. Since the expectation is taken over an arbitrary measure, neither Sinkhorn algorithm nor incremental algorithms such as SAG can be used. An alternative is to approximate µ by an empirical measure ˆµN def. = 1 N PN i=1 δxi where (xi)i=1,...,N are i.i.d samples from µ, and computing Wε(ˆµN, ν) using the discrete methods (Sinkhorn or SAG) detailed in Section 3. However this introduces a discretization noise in the solution as the discrete problem is now different from the original one and thus has a different solution. 
Averaged SGD, on the other hand, does not require µ to be discrete and is thus perfectly adapted to this semi-discrete setting. The algorithm is detailed in Algorithm 2 (the expression for ∇¯hε being given in Equation 4). The convergence rate is O(1/ √ k) thanks to the averaging ˜vk [15]. Numerical Illustrations. Simulations are performed in X = Y = R3. Here µ is a Gaussian mixture (continuous density) and ν = 1 J PJ j=1 δyj with J = 10, where (yj)j are i.i.d. samples from another Gaussian mixture. Each mixture is composed of three Gaussians whose means are drawn randomly in [0, 1]3, and their covariance matrices are constructed as Σ = 0.01(RT + R) + 3I3 where R is 3 × 3 with random entries in [0, 1]. In the following, we denote by v⋆ ε a solution of (Sε), which is approximated by running SGD for 10^7 iterations, 100 times more than those plotted, to ensure reliable convergence curves. Both plots are averaged over 50 runs; lighter lines show the variability in a single run. (a) SGD (b) SGD vs. SAG Figure 2: (a) Plot of ∥vk −v⋆ 0∥2 / ∥v⋆ 0∥2 as a function of k, for SGD and different values of ε (ε = 0 being un-regularized). (b) Plot of ∥vk −v⋆ ε∥2 / ∥v⋆ ε∥2 as a function of k, for SGD and SAG with different numbers N of samples, for regularized OT using ε = 10−2. Figure 2 (a) shows the evolution of ∥vk −v⋆ 0∥2 / ∥v⋆ 0∥2 as a function of k. It highlights the influence of the regularization parameter ε on the iterates of SGD. While the regularized iterates converge faster, they do not converge to the correct unregularized solution. This figure also illustrates the convergence of the solutions of (Sε) toward those of (S0) as ε →0; the corresponding theorem can be found in the supplementary material. Figure 2 (b) shows the evolution of ∥vk −v⋆ ε∥2 / ∥v⋆ ε∥2 as a function of k, for a fixed regularization parameter value ε = 10−2. It compares SGD to SAG using different numbers N of samples for the empirical measures ˆµN.
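Algorithm 2 can be sketched directly from its pseudo-code. In this illustrative version (interface names are ours), `sample_mu` draws one x ∼ µ, which may be a continuous distribution; only ν needs to be discrete, with support `Y` and weights `nu`.

```python
import numpy as np

def avg_sgd_semidiscrete(sample_mu, nu, Y, cost, eps, C0, n_iter=2000, seed=0):
    """Averaged SGD on the semi-dual: ascend the running iterate v_tilde
    with stepsize C0/sqrt(k), and return the Polyak-Ruppert average."""
    rng = np.random.default_rng(seed)
    J = len(nu)
    v_tilde = np.zeros(J)
    v_bar = np.zeros(J)
    for k in range(1, n_iter + 1):
        x = sample_mu(rng)
        c_row = np.array([cost(x, y) for y in Y])
        # grad_v h_eps(x, v_tilde) = nu - pi(x)
        z = (v_tilde - c_row) / eps
        z -= z.max()
        w = nu * np.exp(z)
        grad = nu - w / w.sum()
        v_tilde += (C0 / np.sqrt(k)) * grad
        v_bar = (v_tilde + (k - 1) * v_bar) / k   # running average
    return v_bar
```

Since each stochastic gradient sums to zero, the iterates (and their average) stay in the zero-sum hyperplane of their initialization, which fixes the additive degree of freedom in v.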
While SGD converges to the true solution of the semi-discrete problem, the solution computed by SAG is biased because of the approximation error which comes from the discretization of µ. This error decreases when the sample size N is increased, as the approximation of µ by ˆµN becomes more accurate. 5 Continuous optimal transport using RKHS In the case where neither µ nor ν are discrete, problem (Sε) is infinite-dimensional, so it cannot be solved directly using SGD. We propose in this section to solve the initial dual problem (Dε), using expansions of the dual variables in two reproducing kernel Hilbert spaces (RKHS). Choosing dual variables (or test functions) in a RKHS is the fundamental assumption underlying the Maximum Mean Discrepancy (MMD)[22]. It is thus tempting to draw parallels between the approach in this section and the MMD. The two methods do not, however, share much beyond using RKHSs. Indeed, unlike the MMD, problem (Dε) involves two different dual (test) functions u and v, one for each measure; these are furthermore linked through a regularizer ιε Uc. Recall finally that contrarily to the semi-discrete setting, we can only solve the regularized problem here (i.e. ε > 0), since (Dε) cannot be cast as an expectation maximization problem when ε = 0. Stochastic Continuous Optimization. We consider two RKHS H and G defined on X and on Y, with kernels κ and ℓ, associated with norms ∥· ∥H and ∥· ∥G. Recall the two main properties of RKHS: (a) if u ∈H, then u(x) = ⟨u, κ(·, x)⟩H and (b) κ(x, x′) = ⟨κ(·, x), κ(·, x′)⟩H. The dual problem (Dε) is conveniently re-written in (3) as the maximization of the expectation of f ε(X, Y, u, v) with respect to the random variables (X, Y ) ∼µ ⊗ν. The SGD algorithm applied to this problem reads, starting with u0 = 0 and v0 = 0, (uk, vk) def. = (uk−1, vk−1) + C √ k ∇fε(xk, yk, uk−1, vk−1) ∈H × G, (5) where (xk, yk) are i.i.d. samples from µ ⊗ν. 
The following proposition shows that these (uk, vk) iterates can be expressed as finite sums of kernel functions, with a simple recursion formula. Proposition 5.1. The iterates (uk, vk) defined in (5) satisfy (uk, vk) def. = k X i=1 αi(κ(·, xi), ℓ(·, yi)), where αi def. = ΠBr C √ i 1 −e ui−1(xi)+vi−1(yi)−c(xi,yi) ε , (6) where (xi, yi)i=1...k are i.i.d samples from µ ⊗ν and ΠBr is the projection on the centered ball of radius r. If the solutions of (Dε) are in the H × G and if r is large enough, the iterates (uk,vk) converge to a solution of (Dε). The proof of proposition 5.1 can be found in the supplementary material. 7 (a) setting (b) convergence of uk (c) plots of uk Figure 3: (a) Plot of dµ dx and dν dx. (b) Plot of ∥uk −ˆu⋆∥2 / ∥ˆu⋆∥2 as a function of k with SGD in the RKHS, for regularized OT using ε = 10−1. (c) Plot of the iterates uk for k = 103, 104, 105 and the proxy for the true potential ˆu⋆, evaluated on a grid where µ has non negligible mass. Algorithm 3 Kernel SGD for continuous OT Input: C, kernels κ and ℓ Output: (αk, xk, yk)k=1,... for k = 1, 2, . . . do Sample xk from µ Sample yk from ν uk−1(xk) def. = Pk−1 i=1 αiκ(xk, xi) vk−1(yk) def. = Pk−1 i=1 αiℓ(yk, yi) αk def. = C √ k 1 −e uk−1(xk)+vk−1(yk)−c(xk,yk) ε end for Algorithm 3 describes our kernel SGD approach, in which both potentials u and v are approximated by a linear combination of kernel functions. The main cost lies in the computation of the terms uk−1(xk) and vk−1(yk) which imply a quadratic complexity O(k2). Several methods exist to alleviate the running time complexity of kernel algorithms, e.g. random Fourier features [16] or incremental incomplete Cholesky decomposition [25]. Kernels that are associated with dense RHKS are called universal [23] and can approach any arbitrary potential. In Euclidean spaces X = Y = Rd, where d > 0, a natural choice of universal kernel is the kernel defined by κ(x, x′) = exp(−∥x − x′∥2/σ2). 
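Algorithm 3 with a Gaussian kernel can be sketched as below (ours, for illustration). For brevity this sketch omits the ball projection ΠBr from Proposition 5.1 and stores the expansion naively, which makes iteration k cost O(k) — the quadratic overall complexity noted in the text.

```python
import numpy as np

def kernel_sgd(sample_mu, sample_nu, cost, eps, C0, sigma, n_iter=200, seed=0):
    """Kernel SGD for continuous OT: the potentials are expansions
    u_k = sum_i alpha_i kappa(., x_i), v_k = sum_i alpha_i kappa(., y_i),
    with the same Gaussian kernel used for both spaces."""
    rng = np.random.default_rng(seed)
    kappa = lambda a, b: np.exp(-np.sum((a - b) ** 2) / sigma ** 2)
    xs, ys, alphas = [], [], []
    for k in range(1, n_iter + 1):
        x, y = sample_mu(rng), sample_nu(rng)
        u_x = sum(a * kappa(x, xi) for a, xi in zip(alphas, xs))  # u_{k-1}(x_k)
        v_y = sum(a * kappa(y, yi) for a, yi in zip(alphas, ys))  # v_{k-1}(y_k)
        # new coefficient, as in (6) (projection omitted here)
        alpha = (C0 / np.sqrt(k)) * (1.0 - np.exp((u_x + v_y - cost(x, y)) / eps))
        xs.append(x); ys.append(y); alphas.append(alpha)
    u = lambda x: sum(a * kappa(x, xi) for a, xi in zip(alphas, xs))
    v = lambda y: sum(a * kappa(y, yi) for a, yi in zip(alphas, ys))
    return u, v, alphas
```

The random Fourier features or incomplete Cholesky tricks mentioned in the text would replace the naive sums over stored points.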
Tuning its bandwidth σ is crucial to obtain a good convergence of the algorithm. Finally, let us note that, while entropy regularization of the primal problem (Pε) was instrumental to be able to apply semi-discrete methods in Sections 3 and 4, this is not the case here. Indeed, since the kernel SGD algorithm is applied to the dual (Dε), it is possible to replace KL(π|µ ⊗ν) appearing in (Pε) by other regularizing divergences. A typical example would be a χ2 divergence R X×Y( dπ dµdν (x, y))2dµ(x)dν(y) (with positivity constraints on π). Numerical Illustrations. We consider optimal transport in 1D between a Gaussian µ and a Gaussian mixture ν whose densities are represented in Figure 3 (a). Since there is no existing benchmark for continuous transport, we use the solution of the semi-discrete problem Wε(µ, ˆνN) with N = 103 computed with SGD as a proxy for the solution and we denote it by ˆu⋆. We focus on the convergence of the potential u, as it is continuous in both problems contrarily to v. Figure 3 (b) represents the plot of ∥uk −ˆu⋆∥2/∥ˆu⋆∥2 where u is the evaluation of u on a sample (xi)i=1...N ′ drawn from µ. This gives more emphasis to the norm on points where µ has more mass. The convergence is rather slow but still noticeable. The iterates uk are plotted on a grid for different values of k in Figure 3 (c), to emphasize the convergence to the proxy ˆu⋆. We can see that the iterates computed with the RKHS converge faster where µ has more mass, which is actually where the value of u has the greatest impact in Fε (u being integrated against µ). Conclusion We have shown in this work that the computations behind (regularized) optimal transport can be considerably alleviated, or simply enabled, using a stochastic optimization approach. 
In the discrete case, we have shown that incremental gradient methods can surpass the Sinkhorn algorithm in terms of efficiency, taken for granted that the (constant) stepsize has been correctly selected, which should be possible in practical applications. We have also proposed the first known methods that can address the challenging semi-discrete and continuous cases. All of these three settings can open new perspectives for the application of OT to high-dimensional problems. Acknowledgements GP was supported by the European Research Council (ERC SIGMA-Vision); AG by Région Ile-de-France; MC by JSPS grant 26700002. 8 References [1] F. Aurenhammer, F. Hoffmann, and B. Aronov. Minkowski-type theorems and least-squares clustering. Algorithmica, 20(1):61–76, 1998. [2] F. Bassetti, A. Bodini, and E. Regazzini. On minimum Kantorovich distance estimators. Statistics & Probability Letters, 76(12):1298–1302, 2006. [3] R. Burkard, M. Dell’Amico, and S. Martello. Assignment Problems. SIAM, 2009. [4] G. Carlier, V. Duval, G. Peyré, and B. Schmitzer. Convergence of entropic schemes for optimal transport and gradient flows. arXiv preprint arXiv:1512.02783, 2015. [5] R. Cominetti and J. San Martin. Asymptotic analysis of the exponential penalty trajectory in linear programming. Mathematical Programming, 67(1-3):169–187, 1994. [6] M. Cuturi. Sinkhorn distances: Lightspeed computation of optimal transport. In Adv. in Neural Information Processing Systems, pages 2292–2300, 2013. [7] A. Dieuleveut and F. Bach. Non-parametric stochastic approximation with large step sizes. arXiv preprint arXiv:1408.0361, 2014. [8] J. Franklin and J. Lorenz. On the scaling of multidimensional matrices. Linear Algebra and its applications, 114:717–735, 1989. [9] C. Frogner, C. Zhang, H. Mobahi, M. Araya, and T. Poggio. Learning with a Wasserstein loss. In Adv. in Neural Information Processing Systems, pages 2044–2052, 2015. [10] L. Kantorovich. On the transfer of masses (in russian). 
Doklady Akademii Nauk, 37(2):227–229, 1942. [11] M. J. Kusner, Y. Sun, N. I. Kolkin, and K. Q. Weinberger. From word embeddings to document distances. In ICML, 2015. [12] Q. Mérigot. A multiscale approach to optimal transport. Comput. Graph. Forum, 30(5):1583–1592, 2011. [13] G. Montavon, K.-R. Müller, and M. Cuturi. Wasserstein training of restricted Boltzmann machines. In Adv. in Neural Information Processing Systems, 2016. [14] J. Pennington, R. Socher, and C.D. Manning. Glove: Global vectors for word representation. Proc. of the Empirical Methods in Natural Language Processing (EMNLP 2014), 12:1532–1543, 2014. [15] B. T Polyak and A. B Juditsky. Acceleration of stochastic approximation by averaging. SIAM Journal on Control and Optimization, 30(4):838–855, 1992. [16] A. Rahimi and B. Recht. Random features for large-scale kernel machines. In Adv. in Neural Information Processing Systems, pages 1177–1184, 2007. [17] Y. Rubner, C. Tomasi, and L. J. Guibas. The earth mover’s distance as a metric for image retrieval. IJCV, 40(2):99–121, November 2000. [18] F. Santambrogio. Optimal transport for applied mathematicians. Birkäuser, NY, 2015. [19] M. Schmidt, N. Le Roux, and F. Bach. Minimizing finite sums with the stochastic average gradient. Mathematical Programming, 2016. [20] R. Sinkhorn. A relationship between arbitrary positive matrices and doubly stochastic matrices. Ann. Math. Statist., 35:876–879, 1964. [21] J. Solomon, F. de Goes, G. Peyré, M. Cuturi, A. Butscher, A. Nguyen, T. Du, and L. Guibas. Convolutional Wasserstein distances: Efficient optimal transportation on geometric domains. ACM Transactions on Graphics (SIGGRAPH), 34(4):66:1–66:11, 2015. [22] Bharath K Sriperumbudur, Kenji Fukumizu, Arthur Gretton, Bernhard Schölkopf, Gert RG Lanckriet, et al. On the empirical estimation of integral probability metrics. Electronic Journal of Statistics, 6:1550–1599, 2012. [23] I. Steinwart and A. Christmann. Support vector machines. 
Springer Science & Business Media, 2008. [24] C. Villani. Topics in Optimal Transportation. Graduate studies in Math. AMS, 2003. [25] G. Wu, E. Chang, Y. K. Chen, and C. Hughes. Incremental approximate matrix factorization for speeding up support vector machines. In Proc. of the 12th ACM SIGKDD Intern. Conf. on Knowledge Discovery and Data Mining, pages 760–766, 2006.
The Sound of APALM Clapping: Faster Nonsmooth Nonconvex Optimization with Stochastic Asynchronous PALM Damek Davis and Madeleine Udell Cornell University {dsd95,mru8}@cornell.edu Brent Edmunds University of California, Los Angeles brent.edmunds@math.ucla.edu Abstract We introduce the Stochastic Asynchronous Proximal Alternating Linearized Minimization (SAPALM) method, a block coordinate stochastic proximal-gradient method for solving nonconvex, nonsmooth optimization problems. SAPALM is the first asynchronous parallel optimization method that provably converges on a large class of nonconvex, nonsmooth problems. We prove that SAPALM matches the best known rates of convergence — among synchronous or asynchronous methods — on this problem class. We provide upper bounds on the number of workers for which we can expect to see a linear speedup, which match the best bounds known for less complex problems, and show that in practice SAPALM achieves this linear speedup. We demonstrate state-of-the-art performance on several matrix factorization problems. 1 Introduction Parallel optimization algorithms often feature synchronization steps: all processors wait for the last to finish before moving on to the next major iteration. Unfortunately, the distribution of finish times is heavy tailed. Hence as the number of processors increases, most processors waste most of their time waiting. A natural solution is to remove any synchronization steps: instead, allow each idle processor to update the global state of the algorithm and continue, ignoring read and write conflicts whenever they occur. Occasionally one processor will erase the work of another; the hope is that the gain from allowing processors to work at their own paces offsets the loss from a sloppy division of labor. These asynchronous parallel optimization methods can work quite well in practice, but it is difficult to tune their parameters: lock-free code is notoriously hard to debug.
For these problems, there is nothing as practical as a good theory, which might explain how to set these parameters so as to guarantee convergence. In this paper, we propose a theoretical framework guaranteeing convergence of a class of asynchronous algorithms for problems of the form minimize (x1,...,xm)∈H1×...×Hm f(x1, . . . , xm) + m X j=1 rj(xj), (1) where f is a continuously differentiable (C1) function with an L-Lipschitz gradient, each rj is a lower semicontinuous (not necessarily convex or differentiable) function, and the sets Hj are Euclidean spaces (i.e., Hj = Rnj for some nj ∈N). This problem class includes many (convex and nonconvex) signal recovery problems, matrix factorization problems, and, more generally, any generalized low rank model [20]. Following terminology from these domains, we view f as a loss function and each rj as a regularizer. For example, f might encode the misfit between the observations and the model, while the regularizers rj encode structural constraints on the model such as sparsity or nonnegativity. 30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain. Many synchronous parallel algorithms have been proposed to solve (1), including stochastic proximalgradient and block coordinate descent methods [22, 3]. Our asynchronous variants build on these synchronous methods, and in particular on proximal alternating linearized minimization (PALM) [3]. These asynchronous variants depend on the same parameters as the synchronous methods, such as a step size parameter, but also new ones, such as the maximum allowable delay. Our contribution here is to provide a convergence theory to guide the choice of those parameters within our control (such as the stepsize) in light of those out of our control (such as the maximum delay) to ensure convergence at the rate guaranteed by theory. We call this algorithm the Stochastic Asynchronous Proximal Alternating Linearized Minimization method, or SAPALM for short. 
Lock-free optimization is not a new idea. Many of the first theoretical results for such algorithms appear in the textbook [2], written over a generation ago. But within the last few years, asynchronous stochastic gradient and block coordinate methods have become newly popular, and enthusiasm in practice has been matched by progress in theory. Guaranteed convergence for these algorithms has been established for convex problems; see, for example, [13, 15, 16, 12, 11, 4, 1]. Asynchrony has also been used to speed up algorithms for nonconvex optimization, in particular, for learning deep neural networks [6] and completing low-rank matrices [23]. In contrast to the convex case, the existing asynchronous convergence theory for nonconvex problems is limited to the following four scenarios: stochastic gradient methods for smooth unconstrained problems [19, 10]; block coordinate methods for smooth problems with separable, convex constraints [18]; block coordinate methods for the general problem (1) [5]; and deterministic distributed proximal-gradient methods for smooth nonconvex loss functions with a single nonsmooth, convex regularizer [9]. A general block-coordinate stochastic gradient method with nonsmooth, nonconvex regularizers is still missing from the theory. We aim to fill this gap. Contributions. We introduce SAPALM, the first asynchronous parallel optimization method that provably converges for all nonconvex, nonsmooth problems of the form (1). SAPALM is a block coordinate stochastic proximal-gradient method that generalizes the deterministic PALM method of [5, 3]. When applied to problem (1), we prove that SAPALM matches the best known rates of convergence, due to [8] in the case where each rj is convex and m = 1: that is, asynchrony carries no theoretical penalty for convergence speed. We test SAPALM on a few example problems and compare to a synchronous implementation, showing a linear speedup. Notation. Let m ∈N denote the number of coordinate blocks.
We let H = H1 × . . . × Hm. For every x ∈H, each partial gradient ∇jf(x1, . . . , xj−1, ·, xj+1, . . . , xm) : Hj →Hj is Lj-Lipschitz continuous; we let L = minj{Lj} ≤maxj{Lj} = L. The number τ ∈N is the maximum allowable delay. Define the aggregate regularizer r : H →(−∞, ∞] as r(x) = Pm j=1 rj(xj). For each j ∈{1, . . . , m}, y ∈Hj, and γ > 0, define the proximal operator proxγrj(y) := argmin xj∈Hj rj(xj) + 1 2γ ∥xj −y∥2 For convex rj, proxγrj(y) is uniquely defined, but for nonconvex problems, it is, in general, a set. We make the mild assumption that for all y ∈Hj, we have proxγrj(y) ̸= ∅. A slight technicality arises from our ability to choose among multiple elements of proxγrj(y), especially in light of the stochastic nature of SAPALM. Thus, for all y, j and γ > 0, we fix an element ζj(y, γ) ∈proxγrj(y). (2) By [17, Exercise 14.38], we can assume that ζj is measurable, which enables us to reason with expectations wherever they involve ζj. As shorthand, we use proxγrj(y) to denote the (unique) choice ζj(y, γ). For any random variable or vector X, we let Ek [X] = E X | xk, . . . , x0, νk, . . . , ν0 denote the conditional expectation of X with respect to the sigma algebra generated by the history of SAPALM. 2 Algorithm Description Algorithm 1 displays the SAPALM method. We highlight a few features of the algorithm which we discuss in more detail below. 2 Algorithm 1 SAPALM [Local view] Input: x ∈H 1: All processors in parallel do 2: loop 3: Randomly select a coordinate block j ∈{1, . . . , m} 4: Read x from shared memory 5: Compute g = ∇jf(x) + νj 6: Choose stepsize γj ∈R++ ▷According to Assumption 3 7: xj ←proxγjrj(xj −γjg) ▷According to (2) • Inconsistent iterates. Other processors may write updates to x in the time required to read x from memory. • Coordinate blocks. When the coordinate blocks xj are low dimensional, it reduces the likelihood that one update will be immediately erased by another, simultaneous update. • Noise. 
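Two standard proximal operators illustrate the convex and nonconvex cases of the definition above (this sketch is ours): for the convex ℓ1 norm the prox is the unique soft-thresholding map, while for the nonconvex ℓ0 "norm" it is hard-thresholding, which is set-valued on a measure-zero boundary where we simply fix one element, as in (2).

```python
import numpy as np

def prox_l1(y, gamma):
    """prox of gamma*||.||_1: soft-thresholding (unique; convex case)."""
    return np.sign(y) * np.maximum(np.abs(y) - gamma, 0.0)

def prox_l0(y, gamma):
    """prox of gamma*||.||_0: hard-thresholding. Set-valued exactly at
    |y_i| = sqrt(2*gamma); here we fix the choice 0 on that boundary."""
    return np.where(np.abs(y) > np.sqrt(2.0 * gamma), y, 0.0)
```

Both operators act coordinatewise, which is why separable regularizers r(x) = Σj rj(xj) fit the block structure of SAPALM so naturally.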
The noise ν ∈H is a random variable that we use to model injected noise. It can be set to 0, or chosen to accelerate each iteration, or to avoid saddle points. Algorithm 1 has an equivalent (mathematical) description which we present in Algorithm 2, using an iteration counter k which is incremented each time a processor completes an update. This iteration counter is not required by the processors themselves to compute the updates. In Algorithm 1, a processor might not have access to the shared-memory’s global state, xk, at iteration k. Rather, because all processors can continuously update the global state while other processors are reading, local processors might only read the inconsistently delayed iterate xk−dk = (xk−dk,1 1 , . . . , xk−dk,m m ), where the delays dk are integers less than τ, and xl = x0 when l < 0. Algorithm 2 SAPALM [Global view] Input: x0 ∈H 1: for k ∈N do 2: Randomly select a coordinate block jk ∈{1, . . . , m} 3: Read xk−dk = (xk−dk,1 1 , . . . , xk−dk,m m ) from shared memory 4: Compute gk = ∇jkf(xk−dk) + νk jk 5: Choose stepsize γk jk ∈R++ ▷According to Assumption 3 6: for j = 1, . . . , m do 7: if j = jk then 8: xk+1 jk ←proxγk jk rjk (xk jk −γk jkgk) ▷According to (2) 9: else 10: xk+1 j ←xk j 2.1 Assumptions on the Delay, Independence, Variance, and Stepsizes Assumption 1 (Bounded Delay). There exists some τ ∈N such that, for all k ∈N, the sequence of coordinate delays lie within dk ∈{0, . . . , τ}m. Assumption 2 (Independence). The indices {jk}k∈N are uniformly distributed and collectively IID. They are independent from the history of the algorithm xk, . . . , x0, νk, . . . , ν0 for all k ∈N. We employ two possible restrictions on the noise sequence νk and the sequence of allowable stepsizes γk j , all of which lead to different convergence rates: Assumption 3 (Noise Regimes and Stepsizes). Let σ2 k := Ek ∥νk∥2 denote the expected squared norm of the noise, and let a ∈(1, ∞). 
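The global view of Algorithm 2 can be simulated serially by keeping a short ring buffer of past iterates and reading a uniformly delayed copy, which mimics staleness up to τ. This is our illustrative sketch, not the authors' parallel implementation: the interfaces `grads[j]` and `proxes[j]` are hypothetical, the noise ν is set to 0 (the summable regime with c_k = 1), and we take the theory's L̄ to be max_j L_j.

```python
import numpy as np
from collections import deque

def sapalm_serial(grads, proxes, Ls, x0, tau=2, n_iter=3000, a=2.0, seed=0):
    """Serial simulation of SAPALM. grads[j](x) returns the partial
    gradient of f in block j at the (possibly stale) point x; proxes[j](y, g)
    returns an element of prox_{g*r_j}(y)."""
    rng = np.random.default_rng(seed)
    m = len(x0)
    x = [np.asarray(b, dtype=float).copy() for b in x0]
    past = deque([[b.copy() for b in x]], maxlen=tau + 1)  # delayed snapshots
    L_bar = max(Ls)
    for _ in range(n_iter):
        j = rng.integers(m)
        x_read = past[rng.integers(len(past))]   # stale read, delay <= tau
        # stepsize from Assumption 3 with c_k = 1
        gamma = 1.0 / (a * (Ls[j] + 2.0 * L_bar * tau / np.sqrt(m)))
        x[j] = proxes[j](x[j] - gamma * grads[j](x_read), gamma)
        past.append([b.copy() for b in x])
    return x
```

Even with stale reads, the small stepsize mandated by the theory keeps the iterates stable on a smooth toy problem.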
Assume that Ek νk = 0 and that there is a sequence of weights {ck}k∈N ⊆[1, ∞) such that (∀k ∈N) , (∀j ∈{1, . . . , m}) γk j := 1 ack(Lj + 2Lτm−1/2). 3 which we choose using the following two rules, both of which depend on the growth of σk: Summable. P∞ k=0 σ2 k < ∞ =⇒ck ≡1; α-Diminishing. (α ∈(0, 1)) σ2 k = O((k + 1)−α) =⇒ck = Θ((k + 1)(1−α)). More noise, measured by σk, results in worse convergence rates and stricter requirements regarding which stepsizes can be chosen. We provide two stepsize choices which, depending on the noise regime, interpolate between Θ(1) and Θ(k1−α) for any α ∈(0, 1). Larger stepsizes lead to convergence rates of order O(k−1), while smaller ones lead to order O(k−α). 2.2 Algorithm Features Inconsistent Asynchronous Reading. SAPALM allows asynchronous access patterns. A processor may, at any time, and without notifying other processors: 1. Read. While other processors are writing to shared-memory, read the possibly out-of-sync, delayed coordinates xk−dk,1 1 , . . . , xk−dk,m m . 2. Compute. Locally, compute the partial gradient ∇jkf(xk−dk,1 1 , . . . , xk−dk,m m ). 3. Write. After computing the gradient, replace the jkth coordinate with xk+1 jk ∈argmin y rjk(y) + ⟨∇jkf(xk−dk) + νk jk, y −xk jk⟩+ 1 2γk jk ∥y −xk jk∥2. Uncoordinated access eliminates waiting time for processors, which speeds up computation. The processors are blissfully ignorant of any conflict between their actions, and the paradoxes these conflicts entail: for example, the states xk−dk,1 1 , . . . , xk−dk,m m need never have simultaneously existed in memory. Although we write the method with a global counter k, the asynchronous processors need not be aware of it; and the requirement that the delays dk remain bounded by τ does not demand coordination, but rather serves only to define τ. What Does the Noise Model Capture? SAPALM is the first asynchronous PALM algorithm to allow and analyze noisy updates. The stochastic noise, νk, captures three phenomena: 1. 
Computational Error. Noise due to random computational error. 2. Avoiding Saddles. Noise deliberately injected for the purpose of avoiding saddles, as in [7]. 3. Stochastic Gradients. Noise due to stochastic approximations of delayed gradients. Of course, the noise model also captures any combination of the above phenomena. The last one is, perhaps, the most interesting: it allows us to prove convergence for a stochastic- or minibatch-gradient version of APALM, rather than requiring processors to compute a full (delayed) gradient. Stochastic gradients can be computed faster than their batch counterparts, allowing more frequent updates. 2.3 SAPALM as an Asynchronous Block Mini-Batch Stochastic Proximal-Gradient Method In Algorithm 1, any stochastic estimator ∇f(xk−dk; ξ) of the gradient may be used, as long as Ek ∇f(xk−dk; ξ) = ∇f(xk−dk), and Ek ∥∇f(xk−dk; ξ) −∇f(xk−dk)∥2 ≤σ2. In particular, if Problem 1 takes the form minimize x∈H Eξ [f(x1, . . . , xm; ξ)] + 1 m m X j=1 rj(xj), then, in Algorithm 2, the stochastic mini-batch estimator gk = m−1 k Pmk i=1 ∇f(xk−dk; ξi), where ξi are IID, may be used in place of ∇f(xk−dk) + νk. A quick calculation shows that Ek ∥gk −∇f(xk−dk)∥2 = O(m−1 k ). Thus, any increasing batch size mk = Ω((k + 1)−α), with α ∈(0, 1), conforms to Assumption 3. When nonsmooth regularizers are present, all known convergence rate results for nonconvex stochastic gradient algorithms require the use of increasing, rather than fixed, minibatch sizes; see [8, 22] for analogous, synchronous algorithms. 4 3 Convergence Theorem Measuring Convergence for Nonconvex Problems. For nonconvex problems, it is standard to measure convergence (to a stationary point) by the expected violation of stationarity, which for us is the (deterministic) quantity: Sk := E m X j=1
1 γk j (wk j −xk j ) + νk
2 ; where (∀j ∈{1, . . . , m}) wk j = proxγk j rj(xk j −γk j (∇jf(xk−dk) + νk j )). (3) A reduction to the case r ≡0 and dk = 0 reveals that wk j −xk j + γk j νk j = −γk j ∇jf(xk) and, hence, Sk = E ∥∇f(xk)∥2 . More generally, wk j −rk j + γk j νk j ∈−γk j (∂Lrj(wk j ) + ∇jf(xk−dk)) where ∂Lrj is the limiting subdifferential of rj [17] which, if rj is convex, reduces to the standard convex subdifferential familiar from [14]. A messy but straightforward calculation shows that our convergence rates for Sk can be converted to convergence rates for elements of ∂Lr(wk) + ∇f(wk). We present our main convergence theorem now and defer the proof to Section 4. Theorem 1 (SAPALM Convergence Rates). Let {xk}k∈N ⊆H be the SAPALM sequence created by Algorithm 2. Then, under Assumption 3 the following convergence rates hold: for all T ∈N, if {νk}k∈N is 1. Summable, then min k=0,...,T Sk ≤Ek∼PT [Sk] = O m(L + 2Lτm−1/2) T + 1 ; 2. α-Diminishing, then min k=0,...,T Sk ≤Ek∼PT [Sk] = O m(L + 2Lτm−1/2) + m log(T + 1) (T + 1)−α ; where, for all T ∈N, PT is the distribution {0, . . . , T} such that PT (X = k) ∝c−1 k . Effects of Delay and Linear Speedups. The m−1/2 term in the convergence rates presented in Theorem 1 prevents the delay τ from dominating our rates of convergence. In particular, as long as τ = O(√m), the convergence rate in the synchronous (τ = 0) and asynchronous cases are within a small constant factor of each other. In that case, because the work per iteration in the synchronous and asynchronous versions of SAPALM is the same, we expect a linear speedup: SAPALM with p processors will converge nearly p times faster than PALM, since the iteration counter will be updated p times as often. As a rule of thumb, τ is roughly proportional to the number of processors. Hence we can achieve a linear speedup on as many as O(√m) processors. 3.1 The Asynchronous Stochastic Block Gradient Method If the regularizer r is identically zero, then the noise νk need not vanish in the limit. 
The following theorem guarantees convergence of asynchronous stochastic block gradient descent with a constant minibatch size. See the supplemental material for a proof. Theorem 2 (SAPALM Convergence Rates (r ≡0)). Let {xk}k∈N ⊆H be the SAPALM sequence created by Algorithm 2 in the case that r ≡0. If, for all k ∈N, {Ek ∥νk∥2 }k∈N is bounded (not necessarily diminishing) and (∃a ∈(1, ∞)) , (∀k ∈N) , (∀j ∈{1, . . . , m}) γk j := 1 a √ k(Lj + 2Mτm−1/2) , then for all T ∈N, we have min k=0,...,T Sk ≤Ek∼PT [Sk] = O m(L + 2Lτm−1/2) + m log(T + 1) √ T + 1 , where PT is the distribution {0, . . . , T} such that PT (X = k) ∝k−1/2. 5 4 Convergence Analysis 4.1 The Asynchronous Lyapunov Function Key to the convergence of SAPALM is the following Lyapunov function, defined on H1+τ, which aggregates not only the current state of the algorithm, as is common in synchronous algorithms, but also the history of the algorithm over the delayed time steps: (∀x(0), x(1), . . . , x(τ) ∈H) Φ(x(0), x(1), . . . , x(τ)) = f(x(0)) + r(x(0)) + L 2√m τ X h=1 (τ −h + 1)∥x(h) −x(h −1)∥2. This Lyapunov function appears in our convergence analysis through the following inequality, which is proved in the supplemental material. Lemma 1 (Lyapunov Function Supermartingale Inequality). For all k ∈ N, let zk = (xk, . . . , xk−τ) ∈H1+τ. Then for all ϵ > 0, we have Ek Φ(zk+1) ≤Φ(zk) −1 2m m X j=1 1 γk j −(1 + ϵ) Lj + 2Lτ m1/2 ! Ek ∥wk j −xk j + γk j νk j ∥2 + m X j=1 γk j 1 + γk j (1 + ϵ−1) Lj + 2Lτm−1/2 Ek ∥νk j ∥2 2m where for all j ∈{1, . . . , m}, we have wk j = proxγk j rj(xk j −γk j (∇jf(xk−dk) + νk j )). In particular, for σk = 0, we can take ϵ = 0 and assume the last line is zero. Notice that if σk = ϵ = 0 and γk j is chosen as suggested in Algorithm 2, the (conditional) expected value of the Lyapunov function is strictly decreasing. If σk is nonzero, the factor ϵ will be used in concert with the stepsize γk j to ensure that noise does not cause the algorithm to diverge. 
4.2 Proof of Theorem 1 For either noise regime, we define, for all k ∈N and j ∈{1, . . . , m}, the factor ϵ := 2−1(a −1). With the assumed choice of γk j and ϵ, Lemma 1 implies that the expected Lyapunov function decreases, up to a summable residual: with Ak j := wk j −xk j + γk j νk j , we have E Φ(zk+1) ≤E Φ(zk) −E 1 2m m X j=1 1 γk j 1 −1 + ϵ ack ∥Ak j ∥2 + m X j=1 γk j 1 + γk j (1 + ϵ−1) Lj + 2Lτm−1/2 E Ek ∥νk j ∥2 2m . (4) Two upper bounds follow from the the definition of γk j , the lower bound ck ≥1, and the straightforward inequalities (ack)−1(L + 2Mτm−1/2)−1 ≥γk j ≥(ack)−1(L + 2Mτm−1/2)−1: 1 ck Sk ≤ 1 (1−(1+ϵ)a−1) 2ma(L+2Lτm−1/2) E 1 2m m X j=1 1 γk j 1 −1 + ϵ ack ∥Ak j ∥2 and m X j=1 γk j 1 + γk j (1 + ϵ−1) Lj + 2Lτm−1/2 Ek ∥νk j ∥2 2m ≤(1 + (ack)−1(1 + ϵ−1))(σ2 k/ck) 2a(L + 2Lτm−1/2) . Now rearrange (4), use E Φ(zk+1) ≥infx∈H{f(x) + r(x)} and E Φ(z0) = f(x0) + r(x0), and sum (4) over k to get 1 PT k=0 c−1 k T X k=0 1 ck Sk ≤ f(x0) + r(x0) −infx∈H{f(x) + r(x)} + PT k=0 (1+(ack)−1(1+ϵ−1))(σ2 k/ck) 2a(L+2Lτm−1/2) (1−(1+ϵ)a−1) 2ma(L+2Lτm−1/2) PT k=0 c−1 k . 6 The left hand side of this inequality is bounded from below by mink=0,...,T Sk and is precisely the term Ek∼PT [Sk]. What remains to be shown is an upper bound on the right hand side, which we will now call RT . If the noise is summable, then ck ≡1, so PT k=0 c−1 k = (T +1) and PT k=0 σ2 k/ck < ∞, which implies that RT = O(m(L + 2Lτm−1/2)(T + 1)−1). If the noise is α-diminishing, then ck = Θ k(1−α) , so PT k=0 c−1 k = Θ((T + 1)α) and, because σ2 k/ck = O(k−1), there exists a B > 0 such that PT k=0 σ2 k/ck ≤PT k=0 Bk−1 = O(log(T +1)), which implies that RT = O((m(L+2Lτm−1/2)+ m log(T + 1))(T + 1)−α). 5 Numerical Experiments In this section, we present numerical results to confirm that SAPALM delivers the expected performance gains over PALM. 
We confirm two properties: 1) SAPALM converges to values nearly as low as PALM given the same number of iterations, 2) SAPALM exhibits a near-linear speedup as the number of workers increases. All experiments use an Intel Xeon machine with 2 sockets and 10 cores per socket. We use two different nonconvex matrix factorization problems to exhibit these properties, to which we apply two different SAPALM variants: one without noise, and one with stochastic gradient noise. For each of our examples, we generate a matrix A ∈ R^{n×n} with iid standard normal entries, where n = 2000. Although SAPALM is intended for use on much larger problems, using a small problem size makes write conflicts more likely, and so serves as an ideal setting to understand how asynchrony affects convergence.

1. Sparse PCA with Asynchronous Block Coordinate Updates. We minimize

    argmin_{X,Y} (1/2)‖A − X^T Y‖_F^2 + λ‖X‖_1 + λ‖Y‖_1,    (5)

where X ∈ R^{d×n} and Y ∈ R^{d×n} for some d ∈ N. We solve this problem using SAPALM with no noise, ν^k = 0.

2. Quadratically Regularized Firm Thresholding PCA with Asynchronous Stochastic Gradients. We minimize

    argmin_{X,Y} (1/2)‖A − X^T Y‖_F^2 + λ(‖X‖_Firm + ‖Y‖_Firm) + (µ/2)(‖X‖_F^2 + ‖Y‖_F^2),    (6)

where X ∈ R^{d×n}, Y ∈ R^{d×n}, and ‖·‖_Firm is the firm thresholding penalty proposed in [21]: a nonconvex, nonsmooth function whose proximal operator truncates small values to zero and preserves large values. We solve this problem using the stochastic gradient SAPALM variant from Section 2.3.

In both experiments X and Y are treated as coordinate blocks. Notice that for this problem, the SAPALM update decouples over the entries of each coordinate block. Each worker updates its coordinate block (say, X) by cycling through the coordinates of X and updating each in turn, restarting at a random coordinate after each cycle. In Figures (1a) and (1c), we see objective function values plotted by iteration.
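For reference, a firm thresholding operator matching the description above (small values truncated to zero, large values preserved exactly) can be sketched as follows. This uses the common Gao–Bruce parameterization with thresholds 0 < lam < mu; the precise form used in [21] may differ, so treat this as an illustrative stand-in rather than the paper's operator.

```python
import numpy as np

def firm_threshold(x, lam, mu):
    """Firm thresholding with thresholds 0 < lam < mu (Gao-Bruce form; a
    sketch, not necessarily the exact operator of [21]). Entries with
    |x| <= lam are set to zero, entries with |x| > mu pass through unchanged,
    and the middle range is shrunk linearly between the two regimes."""
    x = np.asarray(x, dtype=float)
    mid = np.sign(x) * mu * (np.abs(x) - lam) / (mu - lam)
    out = np.where(np.abs(x) <= lam, 0.0, mid)
    return np.where(np.abs(x) > mu, x, out)
```

The operator is continuous: at |x| = lam it returns 0 and at |x| = mu it returns x itself, interpolating between soft and hard thresholding.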
By this metric, SAPALM performs as well as PALM, its single-threaded variant; for the second problem, the curves for different thread counts all overlap. Note, in particular, that SAPALM does not diverge. But SAPALM can use additional workers to increment the iteration counter more quickly, as seen in Figure 1b, allowing SAPALM to outperform its single-threaded variant. We measure the speedup Sk(p) of SAPALM by the relative time for p workers to produce k iterates,

    Sk(p) = Tk(1) / Tk(p),    (7)

where Tk(p) is the time to produce k iterates using p workers. Table 2 shows that SAPALM achieves near-linear speedup for a range of variable sizes d. (Dashes – denote experiments not run.)

Figure 1: Sparse PCA ((1a) and (1b)) and Firm Thresholding PCA ((1c) and (1d)) tests for d = 10. (a) Iterates vs. objective; (b) time (s) vs. objective; (c) iterates vs. objective; (d) time (s) vs. objective.

Table 1: Sparse PCA timing (seconds) for 16 iterations by problem size and thread count.

    threads   d=10      d=20       d=100
    1         65.9972   253.387    6144.9427
    2         33.464    127.8973   –
    4         17.5415   67.3267    –
    8         9.2376    34.5614    833.5635
    16        4.934     17.4362    416.8038

Table 2: Sparse PCA speedup for 16 iterations by problem size and thread count.

    threads   d=10      d=20      d=100
    1         1         1         1
    2         1.9722    1.9812    –
    4         3.7623    3.7635    –
    8         7.1444    7.3315    7.3719
    16        13.376    14.5322   14.743

Deviations from linearity can be attributed to a breakdown in the abstraction of a “shared memory” computer: as each worker modifies the “shared” variables X and Y, some communication is required to maintain cache coherency across all cores and processors. In addition, Intel Xeon processors share L3 cache between all cores on the processor. All threads compete for the same L3 cache space, slowing down each iteration.
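The speedup entries of Table 2 follow directly from the timings in Table 1: the speedup for p threads is the single-thread time divided by the p-thread time. A quick numerical check, with the timings transcribed from Table 1 (None marks experiments not run):

```python
# Reproduce Table 2 (speedup) from Table 1 (seconds for 16 iterations),
# using speedup S(p) = T(1) / T(p).
timings = {  # thread count -> (d=10, d=20, d=100)
    1:  (65.9972, 253.387,  6144.9427),
    2:  (33.464,  127.8973, None),
    4:  (17.5415, 67.3267,  None),
    8:  (9.2376,  34.5614,  833.5635),
    16: (4.934,   17.4362,  416.8038),
}
speedups = {p: tuple(None if t is None else timings[1][i] / t
                     for i, t in enumerate(row))
            for p, row in timings.items()}
print(speedups[16])  # close to the reported (13.376, 14.5322, 14.743)
```

The computed ratios agree with Table 2 to the reported precision, confirming the near-linear scaling claim.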
For small d, write conflicts are more likely; for large d, communication to maintain cache coherency dominates. 6 Discussion A few straightforward generalizations of our work are possible; we omit them to simplify notation. Removing the log factors. The log factors in Theorem 1 can easily be removed by fixing a maximum number of iterations for which we plan to run SAPALM and adjusting the ck factors accordingly, as in [14, Equation (3.2.10)]. Cluster points of {xk}k∈N. Using the strategy employed in [5], it’s possible to show that all cluster points of {xk}k∈N are (almost surely) stationary points of f + r. Weakened Assumptions on Lipschitz Constants. We can weaken our assumptions to allow Lj to vary: we can assume Lj(x1, . . . , xj−1, ·, xj+1, . . . , xm)-Lipschitz continuity each partial gradient ∇jf(x1, . . . , xj−1, ·, xj+1, . . . , xm) : Hj →Hj, for every x ∈H. 7 Conclusion This paper presented SAPALM, the first asynchronous parallel optimization method that provably converges on a large class of nonconvex, nonsmooth problems. We provide a convergence theory for SAPALM, and show that with the parameters suggested by this theory, SAPALM achieves a near linear speedup over serial PALM. As a special case, we provide the first convergence rate for (synchronous or asynchronous) stochastic block proximal gradient methods for nonconvex regularizers. These results give specific guidance to ensure fast convergence of practical asynchronous methods on a large class of important, nonconvex optimization problems, and pave the way towards a deeper understanding of stability of these methods in the presence of noise. 8 References [1] A. Agarwal and J. C. Duchi. Distributed delayed stochastic optimization. In 2012 IEEE 51st IEEE Conference on Decision and Control (CDC), pages 5451–5452, Dec 2012. [2] D. P. Bertsekas and J. N. Tsitsiklis. Parallel and Distributed Computation: Numerical Methods, volume 23. [3] J. Bolte, S. Sabach, and M. Teboulle. 
Proximal alternating linearized minimization for nonconvex and nonsmooth problems. Mathematical Programming, 146(1-2):459–494, 2014. [4] D. Davis. SMART: The Stochastic Monotone Aggregated Root-Finding Algorithm. arXiv preprint arXiv:1601.00698, 2016. [5] D. Davis. The Asynchronous PALM Algorithm for Nonsmooth Nonconvex Problems. arXiv preprint arXiv:1604.00526, 2016. [6] J. Dean, G. Corrado, R. Monga, K. Chen, M. Devin, M. Mao, M. Ranzato, A. Senior, P. Tucker, K. Yang, Q. V. Le, and A. Y. Ng. Large Scale Distributed Deep Networks. In F. Pereira, C. J. C. Burges, L. Bottou, and K. Q. Weinberger, editors, Advances in Neural Information Processing Systems 25, pages 1223–1231. Curran Associates, Inc., 2012. [7] R. Ge, F. Huang, C. Jin, and Y. Yuan. Escaping from saddle points—online stochastic gradient for tensor decomposition. In Proceedings of The 28th Conference on Learning Theory, pages 797–842, 2015. [8] S. Ghadimi, G. Lan, and H. Zhang. Mini-batch stochastic approximation methods for nonconvex stochastic composite optimization. Mathematical Programming, 155(1):267–305, 2016. [9] M. Hong. A distributed, asynchronous and incremental algorithm for nonconvex optimization: An admm based approach. arXiv preprint arXiv:1412.6058, 2014. [10] X. Lian, Y. Huang, Y. Li, and J. Liu. Asynchronous Parallel Stochastic Gradient for Nonconvex Optimization. In Advances in Neural Information Processing Systems, pages 2719–2727, 2015. [11] J. Liu, S. J. Wright, C. Ré, V. Bittorf, and S. Sridhar. An Asynchronous Parallel Stochastic Coordinate Descent Algorithm. Journal of Machine Learning Research, 16:285–322, 2015. [12] J. Liu, S. J. Wright, and S. Sridhar. An Asynchronous Parallel Randomized Kaczmarz Algorithm. arXiv preprint arXiv:1401.4780, 2014. [13] H. Mania, X. Pan, D. Papailiopoulos, B. Recht, K. Ramchandran, and M. I. Jordan. Perturbed Iterate Analysis for Asynchronous Stochastic Optimization. arXiv preprint arXiv:1507.06970, 2015. [14] Y. Nesterov. 
Introductory Lectures on Convex Optimization: A Basic Course. Applied Optimization. Kluwer Academic Publ., Boston, Dordrecht, London, 2004. [15] Z. Peng, Y. Xu, M. Yan, and W. Yin. ARock: an Algorithmic Framework for Asynchronous Parallel Coordinate Updates. arXiv preprint arXiv:1506.02396, 2015. [16] B. Recht, C. Re, S. Wright, and F. Niu. Hogwild: A Lock-Free Approach to Parallelizing Stochastic Gradient Descent. In Advances in Neural Information Processing Systems, pages 693–701, 2011. [17] R. T. Rockafellar and R. J.-B. Wets. Variational Analysis, volume 317. Springer Science & Business Media, 2009. [18] P. Tseng. On the Rate of Convergence of a Partially Asynchronous Gradient Projection Algorithm. SIAM Journal on Optimization, 1(4):603–619, 1991. [19] J. Tsitsiklis, D. Bertsekas, and M. Athans. Distributed asynchronous deterministic and stochastic gradient optimization algorithms. IEEE Transactions on Automatic Control, 31(9):803–812, Sep 1986. [20] M. Udell, C. Horn, R. Zadeh, and S. Boyd. Generalized Low Rank Models. arXiv preprint arXiv:1410.0342, 2014. [21] J. Woodworth and R. Chartrand. Compressed sensing recovery via nonconvex shrinkage penalties. arXiv preprint arXiv:1504.02923, 2015. [22] Y. Xu and W. Yin. Block Stochastic Gradient Iteration for Convex and Nonconvex Optimization. SIAM Journal on Optimization, 25(3):1686–1716, 2015. [23] H. Yun, H.-F. Yu, C.-J. Hsieh, S. V. N. Vishwanathan, and I. Dhillon. NOMAD: Non-locking, Stochastic Multi-machine Algorithm for Asynchronous and Decentralized Matrix Completion. Proc. VLDB Endow., 7(11):975–986, July 2014.
Coresets for Scalable Bayesian Logistic Regression

Jonathan H. Huggins Trevor Campbell Tamara Broderick
Computer Science and Artificial Intelligence Laboratory, MIT
{jhuggins@, tdjc@, tbroderick@csail.}mit.edu

30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.

Abstract

The use of Bayesian methods in large-scale data settings is attractive because of the rich hierarchical models, uncertainty quantification, and prior specification they provide. Standard Bayesian inference algorithms are computationally expensive, however, making their direct application to large datasets difficult or infeasible. Recent work on scaling Bayesian inference has focused on modifying the underlying algorithms to, for example, use only a random data subsample at each iteration. We leverage the insight that data is often redundant to instead obtain a weighted subset of the data (called a coreset) that is much smaller than the original dataset. We can then use this small coreset in any number of existing posterior inference algorithms without modification. In this paper, we develop an efficient coreset construction algorithm for Bayesian logistic regression models. We provide theoretical guarantees on the size and approximation quality of the coreset – both for fixed, known datasets, and in expectation for a wide class of data generative models. Crucially, the proposed approach also permits efficient construction of the coreset in both streaming and parallel settings, with minimal additional effort. We demonstrate the efficacy of our approach on a number of synthetic and real-world datasets, and find that, in practice, the size of the coreset is independent of the original dataset size. Furthermore, constructing the coreset takes a negligible amount of time compared to that required to run MCMC on it.

1 Introduction

Large-scale datasets, comprising tens or hundreds of millions of observations, are becoming the norm in scientific and commercial applications ranging from population genetics to advertising. At such scales even simple operations, such as examining each data point a small number of times, become burdensome; it is sometimes not possible to fit all data in the physical memory of a single machine. These constraints have, in the past, limited practitioners to relatively simple statistical modeling approaches. However, the rich hierarchical models, uncertainty quantification, and prior specification provided by Bayesian methods have motivated substantial recent effort in making Bayesian inference procedures, which are often computationally expensive, scale to the large-data setting. The standard approach to Bayesian inference for large-scale data is to modify a specific inference algorithm, such as MCMC or variational Bayes, to handle distributed or streaming processing of data. Examples include subsampling and streaming methods for variational Bayes [6, 7, 16], subsampling methods for MCMC [4, 18, 24], and distributed “consensus” methods for MCMC [8, 19, 21, 22]. Existing methods, however, suffer from both practical and theoretical limitations. Stochastic variational inference [16] and subsampling MCMC methods use a new random subset of the data at each iteration, which requires random access to the data and hence is infeasible for very large datasets that do not fit into memory. Furthermore, in practice, subsampling MCMC methods have been found to require examining a constant fraction of the data at each iteration, severely limiting the computational gains obtained [5, 23]. More scalable methods such as consensus MCMC [19, 21, 22] and streaming variational Bayes [6, 7] lead to gains in computational efficiency, but lack rigorous justification and provide no guarantees on the quality of inference. An important insight in the large-scale setting is that much of the data is often redundant, though there may also be a small set of data points that are distinctive.
For example, in a large document corpus, one news article about a hockey game may serve as an excellent representative of hundreds or thousands of other similar pieces about hockey games. However, there may only be a few articles about luge, so it is also important to include at least one article about luge. Similarly, one individual’s genetic information may serve as a strong representative of other individuals from the same ancestral population admixture, though some individuals may be genetic outliers. We leverage data redundancy to develop a scalable Bayesian inference framework that modifies the dataset instead of the common practice of modifying the inference algorithm. Our method, which can be thought of as a preprocessing step, constructs a coreset – a small, weighted subset of the data that approximates the full dataset [1, 9] – that can be used in many standard inference procedures to provide posterior approximations with guaranteed quality. The scalability of posterior inference with a coreset thus simply depends on the coreset’s growth with the full dataset size. To the best of our knowledge, coresets have not previously been used in a Bayesian setting. The concept of coresets originated in computational geometry (e.g. [1]), but then became popular in theoretical computer science as a way to efficiently solve clustering problems such as k-means and PCA (see [9, 11] and references therein). Coreset research in the machine learning community has focused on scalable clustering in the optimization setting [3, 17], with the exception of Feldman et al. [10], who developed a coreset algorithm for Gaussian mixture models. Coreset-like ideas have previously been explored for maximum likelihood-learning of logistic regression models, though these methods either lack rigorous justification or have only asymptotic guarantees (see [15] and references therein). 
The job of the coreset in the Bayesian setting is to provide an approximation of the full data loglikelihood up to a multiplicative error uniformly over the parameter space. As this paper is the first foray into applying coresets in Bayesian inference, we begin with a theoretical analysis of the quality of the posterior distribution obtained from such an approximate log-likelihood. The remainder of the paper develops the efficient construction of small coresets for Bayesian logistic regression, a useful and widely-used model for the ubiquitous problem of binary classification. We develop a coreset construction algorithm, the output of which uniformly approximates the full data log-likelihood over parameter values in a ball with a user-specified radius. The approximation guarantee holds for a given dataset with high probability. We also obtain results showing that the boundedness of the parameter space is necessary for the construction of a nontrivial coreset, as well as results characterizing the algorithm’s expected performance under a wide class of data-generating distributions. Our proposed algorithm is applicable in both the streaming and distributed computation settings, and the coreset can then be used by any inference algorithm which accesses the (gradient of the) log-likelihood as a black box. Although our coreset algorithm is specifically for logistic regression, our approach is broadly applicable to other Bayesian generative models. Experiments on a variety of synthetic and real-world datasets validate our approach and demonstrate robustness to the choice of algorithm hyperparameters. An empirical comparison to random subsampling shows that, in many cases, coreset-based posteriors are orders of magnitude better in terms of maximum mean discrepancy, including on a challenging 100-dimensional real-world dataset. Crucially, our coreset construction algorithm adds negligible computational overhead to the inference procedure. 
All proofs are deferred to the Supplementary Material.

2 Problem Setting

We begin with the general problem of Bayesian posterior inference. Let D = {(X_n, Y_n)}_{n=1}^N be a dataset, where X_n ∈ X is a vector of covariates and Y_n ∈ Y is an observation. Let π_0(θ) be a prior density on a parameter θ ∈ Θ and let p(Y_n | X_n, θ) be the likelihood of observation n given the parameter θ. The Bayesian posterior is given by the density π_N(θ), where

    π_N(θ) := exp(L_N(θ)) π_0(θ) / E_N,   L_N(θ) := Σ_{n=1}^N ln p(Y_n | X_n, θ),   E_N := ∫ exp(L_N(θ)) π_0(θ) dθ.

Algorithm 1 Construction of logistic regression coreset
Require: Data D, k-clustering Q, radius R > 0, tolerance ε > 0, failure rate δ ∈ (0, 1)
1: for n = 1, ..., N do   ▷ calculate sensitivity upper bounds using the k-clustering
2:   m_n ← N / (1 + Σ_{i=1}^k |G_i^{(−n)}| exp(−R ‖Z̄_{G,i}^{(−n)} − Z_n‖_2))
3: end for
4: m̄_N ← (1/N) Σ_{n=1}^N m_n
5: M ← (c m̄_N / ε²) [(D + 1) log m̄_N + log(1/δ)]   ▷ coreset size; c is from proof of Theorem B.1
6: for n = 1, ..., N do
7:   p_n ← m_n / (N m̄_N)   ▷ importance weights of data
8: end for
9: (K_1, ..., K_N) ∼ Multi(M, (p_n)_{n=1}^N)   ▷ sample data for coreset
10: for n = 1, ..., N do   ▷ calculate coreset weights
11:   γ_n ← K_n / (p_n M)
12: end for
13: D̃ ← {(γ_n, X_n, Y_n) | γ_n > 0}   ▷ only keep data points with non-zero weights
14: return D̃

Our aim is to construct a weighted dataset D̃ = {(γ_m, X̃_m, Ỹ_m)}_{m=1}^M with M ≪ N such that the weighted log-likelihood L̃_N(θ) = Σ_{m=1}^M γ_m ln p(Ỹ_m | X̃_m, θ) satisfies

    |L_N(θ) − L̃_N(θ)| ≤ ε |L_N(θ)|,   ∀θ ∈ Θ.    (1)

If D̃ satisfies Eq. (1), it is called an ε-coreset of D, and the approximate posterior π̃_N(θ) = exp(L̃_N(θ)) π_0(θ) / Ẽ_N, with Ẽ_N = ∫ exp(L̃_N(θ)) π_0(θ) dθ, has a marginal likelihood Ẽ_N which approximates the true marginal likelihood E_N, as shown by Proposition 2.1. Thus, from a Bayesian perspective, the ε-coreset is a useful notion of approximation.

Proposition 2.1. Let L(θ) and L̃(θ) be arbitrary non-positive log-likelihood functions that satisfy |L(θ) − L̃(θ)| ≤ ε|L(θ)| for all θ ∈ Θ.
Then for any prior π_0(θ) such that the marginal likelihoods

    E = ∫ exp(L(θ)) π_0(θ) dθ   and   Ẽ = ∫ exp(L̃(θ)) π_0(θ) dθ

are finite, the marginal likelihoods satisfy |ln E − ln Ẽ| ≤ ε |ln E|.

3 Coresets for Logistic Regression

3.1 Coreset Construction

In logistic regression, the covariates are real feature vectors X_n ∈ R^D, the observations are labels Y_n ∈ {−1, 1}, Θ ⊆ R^D, and the likelihood is defined as

    p(Y_n | X_n, θ) = p_logistic(Y_n | X_n, θ) := 1 / (1 + exp(−Y_n X_n · θ)).

The analysis in this work allows any prior π_0(θ); common choices are the Gaussian, Cauchy [12], and spike-and-slab [13]. For notational brevity, we define Z_n := Y_n X_n, and let φ(s) := ln(1 + exp(−s)). Choosing the optimal ε-coreset is not computationally feasible, so we take a less direct approach. We design our coreset construction algorithm and prove its correctness using a quantity σ_n(Θ) called the sensitivity [9], which quantifies the redundancy of a particular data point n – the larger the sensitivity, the less redundant. In the setting of logistic regression, the sensitivity is

    σ_n(Θ) := sup_{θ∈Θ} N φ(Z_n · θ) / Σ_{ℓ=1}^N φ(Z_ℓ · θ).

Intuitively, σ_n(Θ) captures how much influence data point n has on the log-likelihood L_N(θ) when varying the parameter θ ∈ Θ, and thus data points with high sensitivity should be included in the coreset. Evaluating σ_n(Θ) exactly is not tractable, however, so an upper bound m_n ≥ σ_n(Θ) must be used in its place. Thus, the key challenge is to efficiently compute a tight upper bound on the sensitivity. For the moment we will consider Θ = B_R for any R > 0, where B_R := {θ ∈ R^D | ‖θ‖_2 ≤ R}; we discuss the case of Θ = R^D shortly. Choosing the parameter space to be a Euclidean ball is reasonable since data is usually preprocessed to have mean zero and variance 1 (or, for sparse data, to be between -1 and 1), so each component of θ is typically in a range close to zero (e.g. between -4 and 4) [12].
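Algorithm 1 above admits a short NumPy sketch. This is a minimal illustration under stated assumptions, not the authors' code: the k-clustering is passed in as integer cluster labels (in practice k-means++ would supply it), and the constant c from the proof of Theorem B.1, which is not specified here, is left as a placeholder argument.

```python
import numpy as np

def logistic_coreset(Z, labels, k, R, eps, delta, c=1.0, seed=0):
    """Sketch of Algorithm 1. Z[n] = Y_n * X_n; `labels` assigns each row of Z
    to one of k clusters; `c` stands in for the unspecified constant from the
    proof of Theorem B.1. Returns coreset indices and weights gamma_n."""
    N, D = Z.shape
    sizes = np.array([np.sum(labels == i) for i in range(k)])
    sums = np.array([Z[labels == i].sum(axis=0) for i in range(k)])
    # Sensitivity upper bounds m_n (line 2): the cluster mean is taken with
    # Z_n removed from its own cluster.
    m = np.empty(N)
    for n in range(N):
        denom = 1.0
        for i in range(k):
            cnt = sizes[i] - (labels[n] == i)
            if cnt == 0:
                continue  # G_i^{(-n)} is empty
            mean_wo = (sums[i] - (labels[n] == i) * Z[n]) / cnt
            denom += cnt * np.exp(-R * np.linalg.norm(mean_wo - Z[n]))
        m[n] = N / denom
    m_bar = m.mean()
    # Coreset size (line 5) and importance weights (line 7).
    M = int(np.ceil(c * m_bar / eps**2 * ((D + 1) * np.log(m_bar) + np.log(1.0 / delta))))
    p = m / m.sum()  # equals m_n / (N * m_bar)
    K = np.random.default_rng(seed).multinomial(M, p)  # line 9: sample counts
    gamma = K / (p * M)                                # line 11: coreset weights
    keep = gamma > 0
    return np.nonzero(keep)[0], gamma[keep]            # line 13
```

Since each m_n lies between 1 and N, the mean bound m̄_N is at least 1, so the coreset size M is always positive.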
The idea behind our sensitivity upper bound construction is that we would expect data points that are bunched together to be redundant, while data points that are far from other data have a large effect on inferences. Clustering is an effective way to summarize data and detect outliers, so we will use a k-clustering of the data D to construct the sensitivity bound. A k-clustering is given by k cluster centers Q = {Q_1, ..., Q_k}. Let G_i := {Z_n | i = arg min_j ‖Q_j − Z_n‖_2} be the set of vectors closest to center Q_i and let G_i^{(−n)} := G_i \ {Z_n}. Define Z_{G,i}^{(−n)} to be a uniform random vector from G_i^{(−n)} and let Z̄_{G,i}^{(−n)} := E[Z_{G,i}^{(−n)}] be its mean. The following lemma uses a k-clustering to establish an efficiently computable upper bound on σ_n(B_R):

Lemma 3.1. For any k-clustering Q,

    σ_n(B_R) ≤ m_n := N / (1 + Σ_{i=1}^k |G_i^{(−n)}| e^{−R ‖Z̄_{G,i}^{(−n)} − Z_n‖_2}).    (2)

Furthermore, m_n can be calculated in O(k) time.

The bound in Eq. (2) captures the intuition that if the data forms tight clusters (that is, each Z_n is close to one of the cluster centers), we expect each cluster to be well-represented by a small number of typical data points. For example, if Z_n ∈ G_i, ‖Z̄_{G,i}^{(−n)} − Z_n‖_2 is small, and |G_i^{(−n)}| = Θ(N), then σ_n(B_R) = O(1). We use the (normalized) sensitivity bounds obtained from Lemma 3.1 to form an importance distribution (p_n)_{n=1}^N from which to sample the coreset. If we sample Z_n, then we assign it weight γ_n proportional to 1/p_n. The size of the coreset depends on the mean sensitivity bound, the desired error ε, and a quantity closely related to the VC dimension of θ ↦ φ(θ · Z), which we show is D + 1. Combining these pieces we obtain Algorithm 1, which constructs an ε-coreset with high probability by Theorem 3.2.

Theorem 3.2. Fix ε > 0, δ ∈ (0, 1), and R > 0. Consider a dataset D with k-clustering Q. With probability at least 1 − δ, Algorithm 1 with inputs (D, Q, R, ε, δ) constructs an ε-coreset of D for logistic regression with parameter space Θ = B_R.
Furthermore, Algorithm 1 runs in O(Nk) time. Remark 3.3. The coreset algorithm is efficient with an O(Nk) running time. However, the algorithm requires a k-clustering, which must also be constructed. A high-quality clustering can be obtained cheaply via k-means++ in O(Nk) time [2], although a coreset algorithm could also be used. Examining Algorithm 1, we see that the coreset size M is of order ¯mN log ¯mN, where ¯mN = 1 N P n mn. So for M to be smaller than N, at a minimum, ¯mN should satisfy ¯mN = ˜o(N),1 and preferably ¯mN = O(1). Indeed, for the coreset size to be small, it is critical that (a) Θ is chosen such that most of the sensitivities satisfy σn(Θ) ≪N (since N is the maximum possible sensitivity), (b) each upper bound mn is close to σn(Θ), and (c) ideally, that ¯mN is bounded by a constant. In Section 3.2, we address (a) by providing sensitivity lower bounds, thereby showing that the constraint Θ = BR is necessary for nontrivial sensitivities even for “typical” (i.e. nonpathological) data. We then apply our lower bounds to address (b) and show that our bound in Lemma 3.1 is nearly tight. In Section 3.3, we address (c) by establishing the expected performance of the bound in Lemma 3.1 for a wide class of data-generating distributions. 1Recall that the tilde notation suppresses logarithmic terms. 4 3.2 Sensitivity Lower Bounds We now develop lower bounds on the sensitivity to demonstrate that essentially we must limit ourselves to bounded Θ,2 thus making our choice of Θ = BR a natural one, and to show that the sensitivity upper bound from Lemma 3.1 is nearly tight. We begin by showing that in both the worst case and the average case, for all n, σn(RD) = N, the maximum possible sensitivity – even when the Zn are arbitrarily close. 
Intuitively, the reason for the worst-case behavior is that if there is a separating hyperplane between a data point Zn and the remaining data points, and θ is in the direction of that hyperplane, then when ∥θ∥2 becomes very large, Zn becomes arbitrarily more important than any other data point. Theorem 3.4. For any D ≥3, N ∈N and 0 < ϵ′ < 1, there exists ϵ > 0 and unit vectors Z1, . . . , ZN ∈RD such that for all pairs n, n′, Zn · Zn′ ≥1 −ϵ′ and for all R > 0 and n, σn(BR) ≥ N 1 + (N −1)e−Rϵ √ ϵ′ /4 , and hence σn(RD) = N. The proof of Theorem 3.4 is based on choosing N distinct unit vectors V1, . . . , VN ∈RD−1 and setting ϵ = 1 −maxn̸=n′ Vn · Vn′ > 0. But what is a “typical” value for ϵ? In the case of the vectors being uniformly distributed on the unit sphere, we have the following scaling for ϵ as N increases: Proposition 3.5. If V1, . . . , VN are independent and uniformly distributed on the unit sphere SD := {v ∈RD | ∥v∥= 1} with D ≥2, then with high probability 1 −max n̸=n′ Vn · Vn′ ≥CDN −4/(D−1), where CD is a constant depending only on D. Furthermore, N can be exponential in D even with ϵ remaining very close to 1: Proposition 3.6. For N = ⌊exp((1 −ϵ)2D/4)/ √ 2 ⌋, and V1, . . . , VN i.i.d. such that Vni = ± 1 √ D with probability 1/2, then with probability at least 1/2, 1 −maxn̸=n′ Vn · Vn′ ≥ϵ. Propositions 3.5 and 3.6 demonstrate that the data vectors Zn found in Theorem 3.4 are, in two different senses, “typical” vectors and should not be thought of as worst-case data only occurring in some “negligible” or zero-measure set. These three results thus demonstrate that it is necessary to restrict attention to bounded Θ. We can also use Theorem 3.4 to show that our sensitivity upper bound is nearly tight. Corollary 3.7. For the data Z1, . . . , ZN from Theorem 3.4, N 1 + (N −1)e−Rϵ √ ϵ′ /4 ≤σn(BR) ≤ N 1 + (N −1)e−R √ 2ϵ′ . 
² Certain pathological datasets allow us to use unbounded Θ, but we do not assume we are given such data.

Figure 1: (a) Percentage of time spent creating the coreset relative to the total inference time (including 10,000 iterations of MCMC); except for very small coreset sizes, coreset construction is a small fraction of the overall time. (b, BINARY10; c, WEBSPAM) The mean sensitivities for varying choices of R and k (k = 6 when R varies; R = 3 when k varies). The mean sensitivity increases exponentially in R, as expected, but is robust to the choice of k.

3.3 k-Clustering Sensitivity Bound Performance

While Lemma 3.1 and Corollary 3.7 provide an upper bound on the sensitivity given a fixed dataset, we would also like to understand how the expected mean sensitivity increases with N. We might expect it to be finite since the logistic regression likelihood model is parametric; the coreset would thus be acting as a sort of approximate finite sufficient statistic. Proposition 3.8 characterizes the expected performance of the upper bound from Lemma 3.1 under a wide class of generating distributions. This result demonstrates that, under reasonable conditions, the expected value of m̄_N is bounded for all N. As a concrete example, Corollary 3.9 specializes Proposition 3.8 to data with a single shared Gaussian generating distribution.

Proposition 3.8. Let X_n ~ N(µ_{L_n}, Σ_{L_n}) independently, where L_n ~ Multi(π_1, π_2, ...) independently is the mixture component responsible for generating X_n. For n = 1, ..., N, let Y_n ∈ {−1, 1} be conditionally independent given X_n and set Z_n = Y_n X_n. Select 0 < r < 1/2, and define η_i = max(π_i − N^{−r}, 0). The clustering of the data implied by (L_n)_{n=1}^N results in the expected sensitivity bound

    E[m̄_N] ≤ 1 / ( N^{−1} + Σ_i η_i e^{−R √(A_i N^{−1} η_i^{−1} + B_i)} ) + Σ_{i: η_i > 0} N e^{−2N^{1−2r}}  →  1 / ( Σ_i π_i e^{−R √B_i} )  as N → ∞,

where A_i := Tr[Σ_i] + (1 − ȳ_i²) µ_i^T µ_i, B_i := Σ_j π_j ( Tr[Σ_j] + ȳ_j² µ_i^T µ_i − 2 ȳ_i ȳ_j µ_i^T µ_j + µ_j^T µ_j ), and ȳ_j = E[Y_1 | L_1 = j].

Corollary 3.9.
In the setting of Proposition 3.8, if $\pi_1 = 1$ and all data is assigned to a single cluster, then there is a constant $C$ such that for sufficiently large $N$,
$$\mathbb{E}[\bar m_N] \ \le\ C\, e^{R\sqrt{\mathrm{Tr}[\Sigma_1] + (1 - \bar y_1^2)\mu_1^T\mu_1}}.$$

3.4 Streaming and Parallel Settings

Algorithm 1 is a batch algorithm, but it can easily be used in parallel and streaming computation settings using standard methods from the coreset literature, which are based on the following two observations (cf. [10, Section 3.2]):
1. If $\tilde{\mathcal{D}}_i$ is an $\varepsilon$-coreset for $\mathcal{D}_i$, $i = 1, 2$, then $\tilde{\mathcal{D}}_1 \cup \tilde{\mathcal{D}}_2$ is an $\varepsilon$-coreset for $\mathcal{D}_1 \cup \mathcal{D}_2$.
2. If $\tilde{\mathcal{D}}$ is an $\varepsilon$-coreset for $\mathcal{D}$ and $\tilde{\mathcal{D}}'$ is an $\varepsilon'$-coreset for $\tilde{\mathcal{D}}$, then $\tilde{\mathcal{D}}'$ is an $\varepsilon''$-coreset for $\mathcal{D}$, where $\varepsilon'' := (1+\varepsilon)(1+\varepsilon') - 1$.

We can use these observations to merge coresets that were constructed either in parallel or sequentially, in a binary tree. Coresets are computed for two data blocks, merged using observation 1, then compressed further using observation 2. The next two data blocks have their coresets computed and merged/compressed in the same manner; the coresets from blocks 1&2 and 3&4 can then be merged/compressed analogously. We continue in this way and organize the merge/compress operations into a binary tree. Then, if there are $B$ data blocks in total, only $\log B$ blocks ever need be maintained simultaneously. In the streaming setting we would choose blocks of constant size, so $B = O(N)$, while in the parallel setting $B$ would be the number of machines available.

4 Experiments

We evaluated the performance of the logistic regression coreset algorithm on a number of synthetic and real-world datasets. We used a maximum dataset size of 1 million examples because we wanted to be able to calculate the true posterior, which would be infeasible for extremely large datasets.

Synthetic Data. We generated synthetic binary data according to the model $X_{nd} \overset{indep}{\sim} \mathrm{Bern}(p_d)$, $d = 1, \dots, D$, and $Y_n \overset{indep}{\sim} p_{\mathrm{logistic}}(\cdot \mid X_n, \theta)$.
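This generative model can be sketched directly (a sketch; `make_binary_data` is our name, not the paper's, and the parameter values used here are the BINARY10 settings specified in the text):

```python
import numpy as np

def make_binary_data(N, p, theta, rng):
    """Sample X_nd ~ Bern(p_d) and labels Y_n in {-1, 1} from the logistic model."""
    X = (rng.random((N, len(p))) < p).astype(float)
    prob_pos = 1.0 / (1.0 + np.exp(-X @ theta))   # P(Y_n = 1 | X_n, theta)
    Y = np.where(rng.random(N) < prob_pos, 1.0, -1.0)
    return X, Y

# BINARY10 parameters from the paper
p = np.array([1, .2, .3, .5, .01, .1, .2, .007, .005, .001])
theta = np.array([-3, 1.2, -.5, .8, 3, -1., -.7, 4, 3.5, 4.5])

rng = np.random.default_rng(1)
X, Y = make_binary_data(1000, p, theta, rng)
```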
The idea is to simulate data in which there are a small number of rarely occurring but highly predictive features, which is a common real-world phenomenon. We thus took $p = (1, .2, .3, .5, .01, .1, .2, .007, .005, .001)$ and $\theta = (-3, 1.2, -.5, .8, 3, -1., -.7, 4, 3.5, 4.5)$ for the $D = 10$ experiments (BINARY10), and the first 5 components of $p$ and $\theta$ for the $D = 5$ experiments (BINARY5). The generative model is the same one used by Scott et al. [21], and the first 5 components of $p$ and $\theta$ correspond to those used in the Scott et al. experiments (given in [21, Table 1b]).

Figure 2: Panels (a) BINARY5, (b) BINARY10, (c) MIXTURE, (d) CHEMREACT, (e) WEBSPAM, (f) COVTYPE. Polynomial MMD and negative test log-likelihood of random sampling and the logistic regression coreset algorithm for synthetic and real data with varying subset sizes (lower is better for all plots). For the synthetic data, $N = 10^6$ total data points were used and $10^3$ additional data points were generated for testing. For the real data, 2,500 (resp. 50,000 and 29,000) data points of the CHEMREACT (resp. WEBSPAM and COVTYPE) dataset were held out for testing. One standard deviation error bars were obtained by repeating each experiment 20 times.

We generated a synthetic mixture dataset with continuous covariates (MIXTURE) using a model similar to that of Han et al. [15]: $Y_n \overset{i.i.d.}{\sim} \mathrm{Bern}(1/2)$ and $X_n \overset{indep}{\sim} \mathcal{N}(\mu_{Y_n}, I)$, where $\mu_{-1} = (0,0,0,0,0,1,1,1,1,1)$ and $\mu_1 = (1,1,1,1,1,0,0,0,0,0)$.

Real-world Data. The CHEMREACT dataset consists of $N = 26{,}733$ chemicals, each with $D = 100$ properties. The goal is to predict whether each chemical is reactive. The WEBSPAM corpus consists of $N = 350{,}000$ web pages, approximately 60% of which are spam. The covariates consist of the $D = 127$ features that each appear in at least 25 documents. The cover type (COVTYPE) dataset consists of $N = 581{,}012$ cartographic observations with $D = 54$ features. The task is to predict the type of trees that are present at each observation location.
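The MIXTURE generative model admits an equally short sketch (`make_mixture_data` is our name, not the paper's):

```python
import numpy as np

def make_mixture_data(N, rng):
    """Y_n ~ Bern(1/2) over {-1, 1}; X_n ~ N(mu_{Y_n}, I), as in the MIXTURE data."""
    mu = {-1: np.array([0.] * 5 + [1.] * 5),
           1: np.array([1.] * 5 + [0.] * 5)}
    Y = rng.choice([-1, 1], size=N)
    X = np.stack([rng.normal(mu[y], 1.0) for y in Y])
    return X, Y

rng = np.random.default_rng(2)
X, Y = make_mixture_data(500, rng)
```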
4.1 Scaling Properties of the Coreset Construction Algorithm

Constructing Coresets. In order for coresets to be a worthwhile preprocessing step, it is critical that the time required to construct the coreset is small relative to the time needed to complete the inference procedure. We implemented the logistic regression coreset algorithm in Python.³ In Fig. 1a, we plot the relative time to construct the coreset for each type of dataset ($k = 6$) versus the total inference time, including 10,000 iterations of the MCMC procedure described in Section 4.2. Except for very small coreset sizes, the time to run MCMC dominates.

³More details on our implementation are provided in the Supplementary Material. Code to recreate all of our experiments is available at https://bitbucket.org/jhhuggins/lrcoresets.

Sensitivity. An important question is how the mean sensitivity $\bar m_N$ scales with $N$, as it determines how the size of the coreset scales with the data. Furthermore, ensuring that the mean sensitivity is robust to the number of clusters $k$ is critical, since needing to adjust the algorithm's hyperparameters for each dataset could lead to an unacceptable increase in computational burden. We also seek to understand how the radius $R$ affects the mean sensitivity. Figs. 1b and 1c show the results of our scaling experiments on the BINARY10 and WEBSPAM data. The mean sensitivity is essentially constant across a range of dataset sizes. For both datasets the mean sensitivity is robust to the choice of $k$ and scales exponentially in $R$, as we would expect from Lemma 3.1.

4.2 Posterior Approximation Quality

Since the ultimate goal is to use coresets for Bayesian inference, the key empirical question is how well a posterior formed using a coreset approximates the true posterior distribution. We compared the coreset algorithm to random subsampling of data points, since that is the approach used in many existing scalable versions of variational inference and MCMC [4, 16].
Indeed, coreset-based importance sampling could be used as a drop-in replacement for the random subsampling used by these methods, though we leave the investigation of this idea for future work.

Experimental Setup. We used the adaptive Metropolis-adjusted Langevin algorithm (MALA) [14, 20] for posterior inference. For each dataset, we ran the coreset and random subsampling algorithms 20 times for each choice of subsample size $M$. We ran adaptive MALA for 100,000 iterations on the full dataset and on each subsampled dataset. The subsampled datasets were fixed for the entirety of each run, in contrast to subsampling algorithms that resample the data at each iteration. For the synthetic datasets, which are lower dimensional, we used $k = 4$, while for the real-world datasets, which are higher dimensional, we used $k = 6$. We used a heuristic to choose $R$ as large as was feasible while still obtaining moderate total sensitivity bounds. For a clustering $\mathcal{Q}$ of the data $\mathcal{D}$, let $\mathcal{I} := N^{-1} \sum_{i=1}^k \sum_{Z \in G_i} \|Z - Q_i\|_2^2$ be the normalized $k$-means score. We chose $R = a/\sqrt{\mathcal{I}}$, where $a$ is a small constant. The idea is that, for $i \in [k]$ and $Z_n \in G_i$, we want $R\|\bar Z_{G,i}^{(-n)} - Z_n\|_2 \approx a$ on average, so that the term $\exp\{-R\|\bar Z_{G,i}^{(-n)} - Z_n\|_2\}$ in Eq. (2) is not too small and hence $\sigma_n(B_R)$ is not too large. Our experiments used $a = 3$. We obtained similar results for $4 \le k \le 8$ and $2.5 \le a \le 3.5$, indicating that the logistic regression coreset algorithm has some robustness to the choice of these hyperparameters.

We used negative test log-likelihood and maximum mean discrepancy (MMD) with a 3rd degree polynomial kernel as comparison metrics (so smaller is better).

Synthetic Data Results. Figures 2a-2c show the results for synthetic data. In terms of test log-likelihood, coresets did as well as or outperformed random subsampling. In terms of MMD, the coreset posterior approximation typically outperformed random subsampling by 1-2 orders of magnitude and never did worse.
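The polynomial-kernel MMD used as a comparison metric above can be sketched as follows (a sketch: the biased estimator and the kernel offset c = 1 are our assumptions, since they are not specified here):

```python
import numpy as np

def poly_kernel(A, B, degree=3, c=1.0):
    """Polynomial kernel k(a, b) = (a . b + c)^degree, computed pairwise."""
    return (A @ B.T + c) ** degree

def mmd2(X, Y, degree=3):
    """Biased squared MMD between samples X and Y under the polynomial kernel."""
    kxx = poly_kernel(X, X, degree).mean()
    kyy = poly_kernel(Y, Y, degree).mean()
    kxy = poly_kernel(X, Y, degree).mean()
    return kxx + kyy - 2.0 * kxy

rng = np.random.default_rng(3)
X = rng.normal(size=(200, 5))
assert abs(mmd2(X, X)) < 1e-8  # identical samples give MMD^2 = 0
```

In the experiments the two samples would be MCMC draws from the full-data posterior and from a subsample-based posterior.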
These results suggest much can be gained by using coresets, with comparable performance to random subsampling in the worst case.

Real-world Data Results. Figures 2d-2f show the results for real data. Using coresets led to better performance on CHEMREACT for small subset sizes. Because the dataset was fairly small and random subsampling was done without replacement, coresets were worse for larger subset sizes. Coreset and random subsampling performance was approximately the same for WEBSPAM. On WEBSPAM and COVTYPE, coresets either outperformed or did as well as random subsampling in terms of MMD and test log-likelihood for almost all subset sizes. The only exception was that random subsampling was superior on WEBSPAM for the smallest subset size. We suspect this is due to the variance introduced by the importance sampling procedure used to generate the coreset.

For both the synthetic and real-world data, in many cases we are able to obtain a high-quality logistic regression posterior approximation using a coreset that is many orders of magnitude smaller than the full dataset – sometimes just a few hundred data points. Using such a small coreset represents a substantial reduction in the memory and computational requirements of the Bayesian inference algorithm that uses the coreset for posterior inference. We expect that the use of coresets could lead to similar gains for other Bayesian models. Designing coreset algorithms for other widely-used models is an exciting direction for future research.

Acknowledgments

All authors are supported by the Office of Naval Research under ONR MURI grant N000141110688. JHH is supported by a National Defense Science and Engineering Graduate (NDSEG) Fellowship.

References

[1] P. K. Agarwal, S. Har-Peled, and K. R. Varadarajan. Geometric approximation via coresets. Combinatorial and Computational Geometry, 52:1–30, 2005.
[2] D. Arthur and S. Vassilvitskii. k-means++: The advantages of careful seeding.
In Symposium on Discrete Algorithms, pages 1027–1035. Society for Industrial and Applied Mathematics, 2007.
[3] O. Bachem, M. Lucic, S. H. Hassani, and A. Krause. Approximate K-Means++ in Sublinear Time. In AAAI Conference on Artificial Intelligence, 2016.
[4] R. Bardenet, A. Doucet, and C. C. Holmes. On Markov chain Monte Carlo methods for tall data. arXiv.org, May 2015.
[5] M. J. Betancourt. The Fundamental Incompatibility of Hamiltonian Monte Carlo and Data Subsampling. In International Conference on Machine Learning, 2015.
[6] T. Broderick, N. Boyd, A. Wibisono, A. C. Wilson, and M. I. Jordan. Streaming Variational Bayes. In Advances in Neural Information Processing Systems, Dec. 2013.
[7] T. Campbell, J. Straub, J. W. Fisher, III, and J. P. How. Streaming, Distributed Variational Inference for Bayesian Nonparametrics. In Advances in Neural Information Processing Systems, 2015.
[8] R. Entezari, R. V. Craiu, and J. S. Rosenthal. Likelihood Inflating Sampling Algorithm. arXiv.org, May 2016.
[9] D. Feldman and M. Langberg. A unified framework for approximating and clustering data. In Symposium on Theory of Computing. ACM, June 2011.
[10] D. Feldman, M. Faulkner, and A. Krause. Scalable training of mixture models via coresets. In Advances in Neural Information Processing Systems, pages 2142–2150, 2011.
[11] D. Feldman, M. Schmidt, and C. Sohler. Turning big data into tiny data: Constant-size coresets for k-means, PCA and projective clustering. In Symposium on Discrete Algorithms, pages 1434–1453. SIAM, 2013.
[12] A. Gelman, A. Jakulin, M. G. Pittau, and Y.-S. Su. A weakly informative default prior distribution for logistic and other regression models. The Annals of Applied Statistics, 2(4):1360–1383, Dec. 2008.
[13] E. I. George and R. E. McCulloch. Variable selection via Gibbs sampling. Journal of the American Statistical Association, 88(423):881–889, 1993.
[14] H. Haario, E. Saksman, and J. Tamminen. An adaptive Metropolis algorithm.
Bernoulli, pages 223–242, 2001.
[15] L. Han, T. Yang, and T. Zhang. Local Uncertainty Sampling for Large-Scale Multi-Class Logistic Regression. arXiv.org, Apr. 2016.
[16] M. D. Hoffman, D. M. Blei, C. Wang, and J. Paisley. Stochastic variational inference. The Journal of Machine Learning Research, 14:1303–1347, 2013.
[17] M. Lucic, O. Bachem, and A. Krause. Strong Coresets for Hard and Soft Bregman Clustering with Applications to Exponential Family Mixtures. In International Conference on Artificial Intelligence and Statistics, 2016.
[18] D. Maclaurin and R. P. Adams. Firefly Monte Carlo: Exact MCMC with Subsets of Data. In Uncertainty in Artificial Intelligence, Mar. 2014.
[19] M. Rabinovich, E. Angelino, and M. I. Jordan. Variational consensus Monte Carlo. arXiv.org, June 2015.
[20] G. O. Roberts and R. L. Tweedie. Exponential convergence of Langevin distributions and their discrete approximations. Bernoulli, 2(4):341–363, Nov. 1996.
[21] S. L. Scott, A. W. Blocker, F. V. Bonassi, H. A. Chipman, E. I. George, and R. E. McCulloch. Bayes and big data: The consensus Monte Carlo algorithm. In Bayes 250, 2013.
[22] S. Srivastava, V. Cevher, Q. Tran-Dinh, and D. Dunson. WASP: Scalable Bayes via barycenters of subset posteriors. In International Conference on Artificial Intelligence and Statistics, 2015.
[23] Y. W. Teh, A. H. Thiery, and S. Vollmer. Consistency and fluctuations for stochastic gradient Langevin dynamics. Journal of Machine Learning Research, 17(7):1–33, Mar. 2016.
[24] M. Welling and Y. W. Teh. Bayesian Learning via Stochastic Gradient Langevin Dynamics. In International Conference on Machine Learning, 2011.
Sorting out typicality with the inverse moment matrix SOS polynomial

Jean-Bernard Lasserre
LAAS-CNRS & IMT, Université de Toulouse, 31400 Toulouse, France
lasserre@laas.fr

Edouard Pauwels
IRIT & IMT, Université Toulouse 3 Paul Sabatier, 31400 Toulouse, France
edouard.pauwels@irit.fr

Abstract

We study a surprising phenomenon related to the representation of a cloud of data points using polynomials. We start with the previously unnoticed empirical observation that, given a collection (a cloud) of data points, the sublevel sets of a certain distinguished polynomial capture the shape of the cloud very accurately. This distinguished polynomial is a sum-of-squares (SOS) polynomial derived in a simple manner from the inverse of the empirical moment matrix. In fact, this SOS polynomial is directly related to orthogonal polynomials and the Christoffel function. This allows us to generalize and interpret extremality properties of orthogonal polynomials and to provide a mathematical rationale for the observed phenomenon. Among diverse potential applications, we illustrate the relevance of our results on a network intrusion detection task, for which we obtain performance similar to existing dedicated methods reported in the literature.

1 Introduction

Capturing and summarizing the global shape of a cloud of points is at the heart of many data processing applications such as novelty detection, outlier detection, and related unsupervised learning tasks such as clustering and density estimation. One of the main difficulties is to account for potentially complicated shapes in multidimensional spaces, or equivalently to account for nonstandard dependence relations between variables. Such relations become critical in applications, for example in fraud detection, where a fraudulent action may be the dishonest combination of several actions, each of them being reasonable when considered on its own.
Accounting for complicated shapes is also related to computational geometry and nonlinear algebra applications, for example integral computation [11] and the reconstruction of sets from moments data [6, 7, 12]. Some of these problems have connections and potential applications in machine learning. The work presented in this paper brings together ideas from both disciplines, leading to a method which encodes in a simple manner the global shape and spatial concentration of points within a cloud.

We start with a surprising (and apparently unnoticed) empirical observation. Given a collection of points, one may build a distinguished sum-of-squares (SOS) polynomial whose coefficient (Gram) matrix is the inverse of the empirical moment matrix (see Section 3). Its degree depends on how many moments are considered, a choice left to the user. Remarkably, its sublevel sets capture much of the global shape of the cloud, as illustrated in Figure 1. This phenomenon is not incidental, as illustrated by many additional examples in Appendix A. To the best of our knowledge, this observation has remained unnoticed, and the purpose of this paper is to report this empirical finding to the machine learning community and to provide first elements toward a mathematical understanding as well as potential machine learning applications.

30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain.

Figure 1: Left: 1000 points in $\mathbb{R}^2$ and the level sets of the corresponding inverse moment matrix SOS polynomial $Q_{\mu,d}$ ($d = 4$). The level set $\{x : Q_{\mu,d}(x) \le \binom{p+d}{d}\}$, which corresponds to the average value of $Q_{\mu,d}$, is represented in red. Right: 1040 points in $\mathbb{R}^2$ with size and color proportional to the value of the inverse moment matrix SOS polynomial $Q_{\mu,d}$ ($d = 8$).
The proposed method is based on the computation of the coefficients of a very specific polynomial which depends solely on the empirical moments associated with the data points. From a practical perspective, this can be done via a single pass through the data, or even in an online fashion via a sequence of efficient Woodbury updates. Furthermore, the computational cost of evaluating the polynomial does not depend on the number of data points, which is a crucial difference from existing nonparametric methods such as nearest neighbors or kernel-based methods [3]. On the other hand, this computation requires the inversion of a matrix whose size depends on the dimension of the problem (see Section 3). Therefore, the proposed framework is suited for moderate dimensions and potentially very large numbers of observations.

In Section 4, we first describe an affine invariance result which suggests that the distinguished SOS polynomial captures very intrinsic properties of clouds of points. In a second step, we provide a mathematical interpretation that supports our empirical findings, based on connections with orthogonal polynomials [5]. We propose a generalization of a well-known extremality result for orthogonal univariate polynomials on the real line (or the complex plane) [16, Theorem 3.1.2]. As a consequence, the distinguished SOS polynomial of interest in this paper is understood as the unique optimal solution of a convex optimization problem: minimizing an average value over a structured set of positive polynomials. In addition, we revisit [16, Theorem 3.5.6] about the Christoffel function. The mathematics behind it provide a simple and intuitive explanation for the phenomenon that we empirically observed. Finally, in Section 5, we perform numerical experiments on the KDD Cup network intrusion dataset [13].
Evaluation of the distinguished SOS polynomial provides a score that we use as a measure of outlyingness to detect network intrusions (assuming that they correspond to outlier observations). We refer the reader to [3] for a discussion of available methods for this task. For the sake of a fair comparison, we reproduce the experiments performed in [18] on the same dataset. We report results similar to (and sometimes better than) those described in [18], which suggests that the method is comparable to other dedicated approaches for network intrusion detection, including robust estimation and the Mahalanobis distance [8, 10], mixture models [14], and recurrent neural networks [18].

2 Multivariate polynomials, moments and sums of squares

Notations: We fix the ambient dimension to be $p$ throughout the text. For example, we will manipulate vectors in $\mathbb{R}^p$ as well as $p$-variate polynomials with real coefficients. We denote by $X$ a set of $p$ variables $X_1, \dots, X_p$ which we will use in mathematical expressions defining polynomials. We identify monomials from the canonical basis of $p$-variate polynomials with their exponents in $\mathbb{N}^p$: we associate to $\alpha = (\alpha_i)_{i=1\dots p} \in \mathbb{N}^p$ the monomial $X^\alpha := X_1^{\alpha_1} X_2^{\alpha_2} \cdots X_p^{\alpha_p}$, whose degree is $\deg(\alpha) := \sum_{i=1}^p \alpha_i$. We use the expressions $<_{gl}$ and $\le_{gl}$ to denote the graded lexicographic order, a well ordering over $p$-variate monomials. This amounts to, first, using the canonical order on the degree and, second, breaking ties between monomials of the same degree using the lexicographic order with $X_1 = a$, $X_2 = b$, etc. For example, the monomials in two variables $X_1, X_2$ of degree less than or equal to 3, listed in this order, are: $1,\ X_1,\ X_2,\ X_1^2,\ X_1X_2,\ X_2^2,\ X_1^3,\ X_1^2X_2,\ X_1X_2^2,\ X_2^3$. We denote by $\mathbb{N}^p_d$ the set $\{\alpha \in \mathbb{N}^p;\ \deg(\alpha) \le d\}$, ordered by $\le_{gl}$. $\mathbb{R}[X]$ denotes the set of $p$-variate polynomials: linear combinations of monomials with real coefficients. The degree of a polynomial is the highest of the degrees of its monomials with nonzero coefficients.¹
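The graded lexicographic enumeration just described can be sketched programmatically (a sketch; `grlex_monomials` is our name):

```python
from itertools import product

def grlex_monomials(p, d):
    """All exponent vectors alpha in N^p with deg(alpha) <= d, in graded lex order:
    sort by total degree first, then lexicographically within each degree, with
    higher powers of earlier variables coming first."""
    alphas = [a for a in product(range(d + 1), repeat=p) if sum(a) <= d]
    return sorted(alphas, key=lambda a: (sum(a), tuple(-ai for ai in a)))

# Degree <= 3 in two variables: 1, X1, X2, X1^2, X1*X2, X2^2, X1^3, ...
print(grlex_monomials(2, 3))
```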
We use the same notation, $\deg(\cdot)$, to denote the degree of a polynomial or of an element of $\mathbb{N}^p$. For $d \in \mathbb{N}$, $\mathbb{R}_d[X]$ denotes the set of $p$-variate polynomials of degree less than or equal to $d$. We set $s(d) = \binom{p+d}{d}$, the number of monomials of degree less than or equal to $d$. We denote by $v_d(X)$ the vector of monomials of degree less than or equal to $d$, sorted by $\le_{gl}$: we let $v_d(X) := (X^\alpha)_{\alpha \in \mathbb{N}^p_d} \in \mathbb{R}_d[X]^{s(d)}$. With this notation, we can write a polynomial $P \in \mathbb{R}_d[X]$ as $P(X) = \langle \mathbf{p}, v_d(X) \rangle$ for some real vector of coefficients $\mathbf{p} = (p_\alpha)_{\alpha \in \mathbb{N}^p_d} \in \mathbb{R}^{s(d)}$ ordered using $\le_{gl}$. Given $x = (x_i)_{i=1\dots p} \in \mathbb{R}^p$, $P(x)$ denotes the evaluation of $P$ with the assignments $X_1 = x_1, X_2 = x_2, \dots, X_p = x_p$. Given a Borel probability measure $\mu$ and $\alpha \in \mathbb{N}^p$, $y_\alpha(\mu)$ denotes the moment $\alpha$ of $\mu$: $y_\alpha(\mu) = \int_{\mathbb{R}^p} x^\alpha\, d\mu(x)$. Throughout the paper, we only consider measures all of whose moments are finite.

Moment matrix: Given a Borel probability measure $\mu$ on $\mathbb{R}^p$, the moment matrix of $\mu$, $M_d(\mu)$, is a matrix indexed by monomials of degree at most $d$, ordered by $\le_{gl}$. For $\alpha, \beta \in \mathbb{N}^p_d$, the corresponding entry in $M_d(\mu)$ is defined by $M_d(\mu)_{\alpha,\beta} := y_{\alpha+\beta}(\mu)$, the moment $\alpha + \beta$ of $\mu$. When $p = 2$, writing $y_\alpha = y_\alpha(\mu)$ for $\alpha \in \mathbb{N}^2_4$, with rows and columns indexed by $1, X_1, X_2, X_1^2, X_1X_2, X_2^2$:
$$M_2(\mu) = \begin{pmatrix} 1 & y_{10} & y_{01} & y_{20} & y_{11} & y_{02} \\ y_{10} & y_{20} & y_{11} & y_{30} & y_{21} & y_{12} \\ y_{01} & y_{11} & y_{02} & y_{21} & y_{12} & y_{03} \\ y_{20} & y_{30} & y_{21} & y_{40} & y_{31} & y_{22} \\ y_{11} & y_{21} & y_{12} & y_{31} & y_{22} & y_{13} \\ y_{02} & y_{12} & y_{03} & y_{22} & y_{13} & y_{04} \end{pmatrix}.$$
$M_d(\mu)$ is positive semidefinite for all $d \in \mathbb{N}$. Indeed, for any $\mathbf{p} \in \mathbb{R}^{s(d)}$, letting $P \in \mathbb{R}_d[X]$ be the polynomial with vector of coefficients $\mathbf{p}$, we have $\mathbf{p}^T M_d(\mu) \mathbf{p} = \int_{\mathbb{R}^p} P^2(x)\, d\mu(x) \ge 0$. Furthermore, we have the identity $M_d(\mu) = \int_{\mathbb{R}^p} v_d(x) v_d(x)^T\, d\mu(x)$, where the integral is understood elementwise.

Sum of squares (SOS): We denote by $\Sigma[X] \subset \mathbb{R}[X]$ (resp. $\Sigma_d[X] \subset \mathbb{R}_d[X]$) the set of polynomials (resp. polynomials of degree at most $d$) which can be written as a sum of squares of polynomials. Let $P \in \mathbb{R}_{2m}[X]$ for some $m \in \mathbb{N}$; then $P$ belongs to $\Sigma_{2m}[X]$ if there exist a finite set $J \subset \mathbb{N}$ and a family of polynomials $P_j \in \mathbb{R}_m[X]$, $j \in J$, such that $P = \sum_{j \in J} P_j^2$.
It is obvious that sum of squares polynomials are always nonnegative. A further interesting property is that this class of polynomials is connected with positive semidefiniteness. Indeed, $P$ belongs to $\Sigma_{2m}[X]$ if and only if
$$\exists\, Q \in \mathbb{R}^{s(m)\times s(m)},\ Q \succeq 0,\quad P(x) = v_m(x)^T Q\, v_m(x)\ \ \forall x \in \mathbb{R}^p. \tag{1}$$
As a consequence, every positive semidefinite matrix $Q \in \mathbb{R}^{s(m)\times s(m)}$ defines a polynomial in $\Sigma_{2m}[X]$ via the representation (1).

3 Empirical observations on the inverse moment matrix SOS polynomial

The inverse moment-matrix SOS polynomial is associated to a measure $\mu$ which satisfies the following.

Assumption 1. $\mu$ is a Borel probability measure on $\mathbb{R}^p$ with all its moments finite, and $M_d(\mu)$ is positive definite for a given $d \in \mathbb{N}$.

¹For the null polynomial, we use the convention that its degree is 0 and that it is $\le_{gl}$-smaller than all other monomials.

Definition 1. Let $\mu, d$ satisfy Assumption 1. We call the SOS polynomial $Q_{\mu,d} \in \Sigma_{2d}[X]$ defined by the application
$$x \mapsto Q_{\mu,d}(x) := v_d(x)^T M_d(\mu)^{-1} v_d(x),\quad x \in \mathbb{R}^p, \tag{2}$$
A mathematical intuition and further properties behind these observations are developped in Section 4. 3.1 Sublevel sets The starting point of our investigations is the following phenomenon which to the best of our knowledge has remained unnoticed in the literature. For the sake of clarity and simplicity we provide an illustration in the plane. Consider the following experiment in R2 for a fixed d ∈N: represent on the same graphic, the cloud of points {xi}i=1...n and the sublevel sets of SOS polynomial Qµ,d in R2 (equivalently, the superlevel sets of the Christoffel function). This is illustrated in the left panel of Figure 3. The collection of points consists of 500 simulations of two different Gaussians and the value of d is 4. The striking feature of this plot is that the level sets capture the global shape of the cloud of points quite accurately. In particular, the level set {x : Qµ,d(x) ≤ p+d d } captures most of the points. We could reproduce very similar observations on different shapes with various number of points in R2 and degree d (see Appendix A). 3.2 Measuring outlyingness An additional remark in a similar line is that Qµ,d tends to take higher values on points which are isolated from other points. Indeed in the left panel of Figure 3, the value of the polynomial tends to be smaller on the boundary of the cloud. This extends to situations where the collection of points correspond to shape with a high density of points with a few additional outliers. We reproduce a similar experiment on the right panel of Figure 3. In this example, 1000 points are sampled close to a ring shape and 40 additional points are sampled uniformly on a larger square. We do not represent the sublevel sets of Qµ,d here. Instead, the color and shape of the points are taken proportionally to the value of Qµ,d, with d = 8. 
First, the results confirm the observation of the previous paragraph: values at points that fall close to the ring shape tend to be smaller, while points on the boundary of the ring shape receive larger values. Second, there is a clear increase in the size of the points that are relatively far away from the ring shape. This highlights the fact that $Q_{\mu,d}$ tends to take higher values in less populated areas of the space.

3.3 Relation to maximum likelihood estimation

If we fix $d = 1$, we recover the maximum likelihood estimate for the Gaussian, up to a constant additive factor. To see this, set $\bar\mu = \frac{1}{n}\sum_{i=1}^n x_i$ and $S = \frac{1}{n}\sum_{i=1}^n x_i x_i^T$. With this notation, we have the following block representation of the moment matrix and its inverse:
$$M_1(\mu) = \begin{pmatrix} 1 & \bar\mu^T \\ \bar\mu & S \end{pmatrix}, \qquad M_1(\mu)^{-1} = \begin{pmatrix} 1 + \bar\mu^T V^{-1}\bar\mu & -\bar\mu^T V^{-1} \\ -V^{-1}\bar\mu & V^{-1} \end{pmatrix},$$
where $V = S - \bar\mu\bar\mu^T$ is the empirical covariance matrix, and the expression for the inverse follows from the Schur complement. In this case, we have $Q_{\mu,1}(x) = 1 + (x - \bar\mu)^T V^{-1}(x - \bar\mu)$ for all $x \in \mathbb{R}^p$. We recognize the quadratic form that appears in the density of the multivariate Gaussian with parameters estimated by maximum likelihood. This suggests a connection between the inverse moment matrix SOS polynomial and maximum likelihood estimation. Unfortunately, this connection is difficult to generalize to higher values of $d$, so we do not pursue the idea of interpreting the empirical observations of this section through the prism of maximum likelihood estimation and leave it for further research. Instead, we propose an alternative view in Section 4.

3.4 Computational aspects

Recall that $s(d) = \binom{p+d}{d}$ is the number of $p$-variate monomials of degree up to $d$. The computation of $Q_{\mu,d}$ requires $O(n\, s(d)^2)$ operations for the computation of the moment matrix and $O(s(d)^3)$ operations for the matrix inversion. The evaluation of $Q_{\mu,d}$ requires $O(s(d)^2)$ operations. Estimating the coefficients of $Q_{\mu,d}$ thus has a computational cost that depends only linearly on the number of points $n$.
The cost of evaluating $Q_{\mu,d}$ is constant with respect to the number of points $n$. This is an important contrast with kernel-based or distance-based methods (such as nearest neighbors and the one-class SVM) for density estimation or outlier detection, since they usually require at least $O(n^2)$ operations for the evaluation of the model [3]. Moreover, this is well suited to online settings, where the inverse moment matrix can be maintained using rank-one Woodbury updates [15, Section 2.7.1]. The dependence on the dimension $p$ is of the order of $p^d$ for fixed $d$. Similarly, the dependence on $d$ is of the order of $d^p$ for fixed dimension $p$, and the joint dependence is exponential. Furthermore, $M_d(\mu)$ has a Hankel structure, which is known to produce ill-conditioned matrices. This suggests that the direct computation and evaluation of $Q_{\mu,d}$ will mostly make sense for moderate dimensions and degrees $d$. In our experiments, the evaluation of $Q_{\mu,d}$ remains quite stable for large $d$, but the inversion leads to numerical errors for higher values (around 20).

4 Invariance and interpretation through orthogonal polynomials

The purpose of this section is to provide a mathematical rationale that explains the empirical observations made in Section 3. All proofs are postponed to Appendix B. We fix a Borel probability measure $\mu$ on $\mathbb{R}^p$ which satisfies Assumption 1. Note that $M_d(\mu)$ is always positive definite if $\mu$ is not supported on the zero set of a polynomial of degree at most $d$. Under Assumption 1, $M_d(\mu)$ induces an inner product on $\mathbb{R}^{s(d)}$ and, by extension, on $\mathbb{R}_d[X]$ (see Section 2). This inner product is denoted by $\langle \cdot, \cdot \rangle_\mu$ and satisfies, for any polynomials $P, Q \in \mathbb{R}_d[X]$ with coefficient vectors $\mathbf{p}, \mathbf{q} \in \mathbb{R}^{s(d)}$,
$$\langle P, Q \rangle_\mu := \langle \mathbf{p}, M_d(\mu)\mathbf{q} \rangle_{\mathbb{R}^{s(d)}} = \int_{\mathbb{R}^p} P(x)Q(x)\, d\mu(x).$$
We will also use the canonical inner product over $\mathbb{R}_d[X]$, written $\langle P, Q \rangle_{\mathbb{R}_d[X]} := \langle \mathbf{p}, \mathbf{q} \rangle_{\mathbb{R}^{s(d)}}$ for any polynomials $P, Q \in \mathbb{R}_d[X]$ with coefficient vectors $\mathbf{p}, \mathbf{q} \in \mathbb{R}^{s(d)}$.
We will omit the subscripts for this canonical inner product and use $\langle \cdot, \cdot \rangle$ for both products.

4.1 Affine invariance

It is worth noticing that the mapping $x \mapsto Q_{\mu,d}(x)$ does not depend on the particular choice of $v_d(X)$ as a basis of $\mathbb{R}_d[X]$; any other basis would lead to the same mapping. This leads to the result that $Q_{\mu,d}$ captures affine-invariant properties of $\mu$.

Lemma 1. Let $\mu$ satisfy Assumption 1 and let $A \in \mathbb{R}^{p\times p}$, $b \in \mathbb{R}^p$ define an invertible affine mapping on $\mathbb{R}^p$, $\mathcal{A}\colon x \mapsto Ax + b$. Then the push-forward measure, defined by $\tilde\mu(S) = \mu(\mathcal{A}^{-1}(S))$ for all Borel sets $S \subset \mathbb{R}^p$, satisfies Assumption 1 (with the same $d$ as $\mu$), and for all $x \in \mathbb{R}^p$, $Q_{\mu,d}(x) = Q_{\tilde\mu,d}(Ax + b)$.

Lemma 1 is probably better understood when $\mu = \frac{1}{n}\sum_{i=1}^n \delta_{x_i}$, as in Section 3. In this case, we have $\tilde\mu = \frac{1}{n}\sum_{i=1}^n \delta_{Ax_i+b}$, and Lemma 1 asserts that the level sets of $Q_{\tilde\mu,d}$ are simply the images of those of $Q_{\mu,d}$ under the affine transformation $x \mapsto Ax + b$. This is illustrated in Appendix D.

4.2 Connection with orthogonal polynomials

We define a classical [16, 5] family of orthonormal polynomials $\{P_\alpha\}_{\alpha \in \mathbb{N}^p_d}$, ordered according to $\le_{gl}$, which satisfies, for all $\alpha \in \mathbb{N}^p_d$,
$$\langle P_\alpha, X^\beta \rangle = 0 \ \text{ if } \alpha <_{gl} \beta, \qquad \langle P_\alpha, P_\alpha \rangle_\mu = 1, \qquad \langle P_\alpha, X^\beta \rangle_\mu = 0 \ \text{ if } \beta <_{gl} \alpha, \qquad \langle P_\alpha, X^\alpha \rangle_\mu > 0. \tag{3}$$
It follows from (3) that $\langle P_\alpha, P_\beta \rangle_\mu = 0$ if $\alpha \ne \beta$. Existence and uniqueness of such a family is guaranteed by the Gram-Schmidt orthonormalization process following the $\le_{gl}$ order and by the positivity of the moment matrix; see for instance [5, Theorem 3.1.11]. There exist determinantal formulae [9], and a more precise description can be given for measures with additional geometric properties; see [5] for many examples. Let $D_d(\mu)$ be the lower triangular matrix whose rows are the coefficients of the polynomials $P_\alpha$ defined in (3), ordered by $\le_{gl}$. It can be shown that $D_d(\mu) = L_d(\mu)^{-T}$, where $L_d(\mu)$ is the Cholesky factor of $M_d(\mu)$. Furthermore, there is a direct relation with the inverse moment matrix: $M_d(\mu)^{-1} = D_d(\mu)^T D_d(\mu)$ [9, Proof of Theorem 3.1].
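The relation between the Cholesky factorization and the orthonormal family is easy to verify numerically (a sketch with a random positive definite stand-in for $M_d(\mu)$; note that under NumPy's Cholesky convention $M = CC^T$ with $C$ lower triangular, the lower-triangular coefficient matrix is $D = C^{-1}$):

```python
import numpy as np

rng = np.random.default_rng(5)
A = rng.normal(size=(6, 6))
M = A @ A.T + 6 * np.eye(6)   # positive definite stand-in for M_d(mu)

C = np.linalg.cholesky(M)     # NumPy convention: M = C C^T, C lower triangular
D = np.linalg.inv(C)          # lower triangular coefficient matrix

# Rows of D behave like coefficient vectors of the orthonormal P_alpha:
assert np.allclose(D.T @ D, np.linalg.inv(M))  # M_d(mu)^{-1} = D^T D
assert np.allclose(D @ M @ D.T, np.eye(6))     # <P_alpha, P_beta>_mu = delta_{alpha beta}
```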
This has the following consequence.

Lemma 2 Let $\mu$ satisfy Assumption 1. Then $Q_{\mu,d} = \sum_{\alpha \in \mathbb{N}^p_d} P_\alpha^2$, where the family $\{P_\alpha\}_{\alpha \in \mathbb{N}^p_d}$ is defined by (3), and $\int_{\mathbb{R}^p} Q_{\mu,d}(x)\, d\mu(x) = s(d)$.

That is, $Q_{\mu,d}$ is a very specific and distinguished SOS polynomial: the sum of squares of the orthonormal basis elements $\{P_\alpha\}_{\alpha \in \mathbb{N}^p_d}$ of $\mathbb{R}_d[X]$ (w.r.t. $\mu$). Furthermore, the average value of $Q_{\mu,d}$ with respect to $\mu$ is $s(d)$, which corresponds to the red level set in the left panel of Figure 3.

4.3 A variational formulation for the inverse moment matrix SOS polynomial

In this section, we show that the family of polynomials $\{P_\alpha\}_{\alpha \in \mathbb{N}^p_d}$ defined in (3) is the unique solution (up to a multiplicative constant) of a convex optimization problem over polynomials. This fact, combined with Lemma 2, provides a mathematical rationale for the empirical observations outlined in Section 3. Consider the following optimization problem:
$$\min_{Q_\alpha,\, \theta_\alpha,\, \alpha \in \mathbb{N}^p_d} \; \frac{1}{2} \int_{\mathbb{R}^p} \sum_{\alpha \in \mathbb{N}^p_d} Q_\alpha(x)^2\, d\mu(x) \tag{4}$$
$$\text{s.t.} \quad q_{\alpha\alpha} \geq \exp(\theta_\alpha), \qquad q_{\alpha\beta} = 0, \;\; \alpha, \beta \in \mathbb{N}^p_d,\; \alpha <_{gl} \beta, \qquad \sum_{\alpha \in \mathbb{N}^p_d} \theta_\alpha = 0,$$
where $Q_\alpha(x) = \sum_{\beta \in \mathbb{N}^p_d} q_{\alpha\beta} x^\beta$ is a polynomial and $\theta_\alpha$ is a real variable, for each $\alpha \in \mathbb{N}^p_d$.

We first comment on problem (4). Let $P = \sum_{\alpha \in \mathbb{N}^p_d} Q_\alpha^2$ be the SOS polynomial appearing in the objective of (4). The objective of (4) simply involves the average value of $P$ with respect to $\mu$. Let $S_d \subset \Sigma_d[X]$ be the set of such SOS polynomials $P$ that admit a sum-of-squares decomposition satisfying the constraints of (4) (for some value of the real variables $\{\theta_\alpha\}_{\alpha \in \mathbb{N}^p_d}$). With this notation, problem (4) has the simple formulation $\min_{P \in S_d} \frac{1}{2} \int P\, d\mu$. Based on this formulation, problem (4) can be interpreted as balancing two antagonistic targets: on one hand, minimizing the average value of the SOS polynomial $P$ with respect to $\mu$; on the other hand, avoiding the trivial polynomial, which is enforced by the constraint $P \in S_d$. The constraint $P \in S_d$ is simple and natural.
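Lemma 2 and the Cholesky relation can be verified numerically. One caveat on conventions: NumPy's `cholesky` returns the lower factor $L$ with $M = L L^T$, so in that convention the lower triangular coefficient matrix of the orthonormal family is $D = L^{-1}$ (and $M^{-1} = D^T D$ still holds). A sketch, with helper names of our own choosing:

```python
import itertools
import numpy as np

def monomials(X, d):
    """All monomials x^alpha, |alpha| <= d, at the rows of X (graded lex order)."""
    alphas = sorted(
        (a for a in itertools.product(range(d + 1), repeat=X.shape[1]) if sum(a) <= d),
        key=lambda a: (sum(a), a),
    )
    return np.stack([np.prod(X ** np.asarray(a), axis=1) for a in alphas], axis=1)

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 2))
d = 3
V = monomials(X, d)
M = V.T @ V / X.shape[0]        # empirical moment matrix M_d(mu)

# NumPy convention: M = L @ L.T with L lower triangular, hence D = inv(L)
# is lower triangular with positive diagonal and M^{-1} = D.T @ D.
L = np.linalg.cholesky(M)
D = np.linalg.inv(L)
P = V @ D.T                      # P[i, alpha] = P_alpha(x_i)

q_sos = (P ** 2).sum(axis=1)     # Lemma 2: Q = sum_alpha P_alpha^2
q_direct = np.einsum('ij,jk,ik->i', V, np.linalg.inv(M), V)
assert np.allclose(q_sos, q_direct)

# orthonormality w.r.t. mu, and average value s(d) = C(5, 3) = 10 for p = 2, d = 3
assert np.allclose(P.T @ P / X.shape[0], np.eye(P.shape[1]), atol=1e-8)
assert abs(q_sos.mean() - P.shape[1]) < 1e-8
```

The sum-of-squares route via the Cholesky factor and the direct route via $M_d(\mu)^{-1}$ agree, and the sample average of $Q_{\mu,d}$ is exactly $s(d)$, as Lemma 2 predicts.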
It ensures that $P$ is a sum of squares of polynomials $\{Q_\alpha\}_{\alpha \in \mathbb{N}^p_d}$, where the leading term of each $Q_\alpha$ (according to the ordering $\leq_{gl}$) is $q_{\alpha\alpha} x^\alpha$ with $q_{\alpha\alpha} > 0$ (and hence does not vanish). Conversely, using a Cholesky factorization, for any SOS polynomial $Q$ of degree $2d$ whose coefficient matrix (see equation (1)) is positive definite, there exists $a > 0$ such that $aQ \in S_d$. This suggests that $S_d$ is a quite general class of nonvanishing SOS polynomials. The following result, which relates $Q_{\mu,d}$ to the solutions of (4), uses a generalization of [16, Theorem 3.1.2] to orthogonal polynomials of several variables.

Theorem 1 Under Assumption 1, problem (4) is a convex optimization problem with a unique optimal solution $(Q^*_\alpha, \theta^*_\alpha)$, which satisfies $Q^*_\alpha = \sqrt{\lambda}\, P_\alpha$, $\alpha \in \mathbb{N}^p_d$, for some $\lambda > 0$. In particular, the distinguished SOS polynomial $Q_{\mu,d} = \sum_{\alpha \in \mathbb{N}^p_d} P_\alpha^2 = \frac{1}{\lambda} \sum_{\alpha \in \mathbb{N}^p_d} (Q^*_\alpha)^2$ is (part of) the unique optimal solution of (4).

Theorem 1 states that, up to the scaling factor $\lambda$, the distinguished SOS polynomial $Q_{\mu,d}$ is the unique optimal solution of problem (4). A detailed proof is provided in Appendix B; we only sketch the main ideas here. First, it is remarkable that for each fixed $\alpha \in \mathbb{N}^p_d$ (and again up to a scaling factor), the polynomial $P_\alpha$ is the unique optimal solution of the problem
$$\min_Q \left\{ \int Q^2\, d\mu \;:\; Q \in \mathbb{R}_d[X],\; Q(x) = x^\alpha + \sum_{\beta <_{gl} \alpha} q_\beta\, x^\beta \right\}.$$
This fact is well known in the univariate case [16, Theorem 3.1.2] but does not seem to have been exploited in the literature, at least for purposes similar to ours. So, intuitively, $P_\alpha^2$ should be as close to 0 as possible on the support of $\mu$. Problem (4) has similar properties, and the constraint on the vector of weights $\theta$ enforces that, at an optimal solution, the contribution of $\int (Q^*_\alpha)^2\, d\mu$ to the overall sum in the criterion is the same for all $\alpha$. Using Lemma 2 then yields (up to a multiplicative constant) the polynomial $Q_{\mu,d}$. Other constraints on $\theta$ would yield different weighted sums of the squares $P_\alpha^2$.
This will be a subject of further investigation. To sum up, Theorem 1 provides a rationale for our observations: when solving (4), intuitively, $Q_{\mu,d}$ should be close to 0 on average while remaining in a class of nonvanishing SOS polynomials.

4.4 Christoffel function and outlier detection

The following result from [5, Theorem 3.5.6] draws a direct connection between $Q_{\mu,d}$ and the Christoffel function (the right-hand side of (5)).

Theorem 2 ([5]) Let Assumption 1 hold and let $z \in \mathbb{R}^p$ be fixed and arbitrary. Then
$$Q_{\mu,d}(z)^{-1} = \min_{P \in \mathbb{R}_d[X]} \left\{ \int_{\mathbb{R}^p} P(x)^2\, d\mu(x) \;:\; P(z) = 1 \right\}. \tag{5}$$

Theorem 2 provides a mathematical rationale for the use of $Q_{\mu,d}$ for outlier or novelty detection. Indeed, from Lemma 2 and equation (3), we have $Q_{\mu,d} \geq 1$ on $\mathbb{R}^p$. Furthermore, the solution of the minimization problem in (5) satisfies $P(z)^2 = 1$ and, by Markov's inequality, $\mu\left(\{x \in \mathbb{R}^p : P(x)^2 \leq 1\}\right) \geq 1 - Q_{\mu,d}(z)^{-1}$. Hence, for high values of $Q_{\mu,d}(z)$, the sublevel set $\{x \in \mathbb{R}^p : P(x)^2 \leq 1\}$ contains most of the mass of $\mu$ while $P(z)^2 = 1$. An illustration of this discussion is given in Appendix E. Again, the result of Theorem 2 does not seem to have been interpreted for purposes similar to ours.

5 Experiments on network intrusion datasets

In addition to its own mathematical interest, Theorem 1 can be exploited for various purposes. For instance, the sublevel sets of $Q_{\mu,d}$, and in particular $\{x \in \mathbb{R}^p : Q_{\mu,d}(x) \leq \binom{p+d}{d}\}$, can be used to encode a cloud of points in a simple and compact form. In this section, however, we focus on another potential application: anomaly detection. The empirical findings described in Section 3 suggest that the polynomial $Q_{\mu,d}$ can be used to detect outliers in a collection of real vectors (with $\mu$ the empirical average). This is backed up by the results presented in Section 4. We illustrate these properties on a real-world example: the KDD Cup 99 network intrusion dataset [13], consisting of network connection data labeled as normal traffic or network intrusions.
We follow [19] and [18] and construct five datasets of labeled vectors in $\mathbb{R}^3$ with the following properties:

Dataset              | http   | smtp  | ftp-data | ftp   | others
Number of examples   | 567498 | 95156 | 30464    | 4091  | 5858
Proportion of attacks| 0.004  | 0.0003| 0.023    | 0.077 | 0.016

The details of the dataset construction are available in [19, 18] and reproduced in Appendix C. The main idea is to compute an outlyingness score (independent of the label) and to compare the outliers predicted by the score with the network intrusion labels. The underlying assumption is that network intrusions correspond to infrequent abnormal behaviors and can therefore be treated as outliers. We reproduce the same experiment as in [18, Section 5.4], using the value of $Q_{\mu,d}$ from Definition 1 as an outlyingness score (with $d = 3$). The authors of [18] compared different methods in the same experimental setting: robust estimation and Mahalanobis distance [8, 10], mixture models [14], and recurrent neural networks; the results are gathered in [18, Figure 7]. In the left panel of Figure 2 we report the same performance measure for our approach: we first compute the value of $Q_{\mu,d}$ for each data point and use it as an outlyingness score. We then display the proportion of correctly identified outliers among the examples with score above a given threshold, as a function of the proportion of examples with score above that threshold (for different values of the threshold). The main comments are as follows.

Figure 2: Left: reproduction of the results described in [18], using $Q_{\mu,d}$ as an outlyingness score ($d = 3$); percentage of correctly identified outliers versus percentage of top outlyingness scores, for the datasets http, smtp, ftp-data, ftp, and others. Right: precision-recall curves for different values of $d$ on the "others" dataset, with AUPR values 0.08, 0.18, 0.18, 0.16, 0.15, and 0.13 for $d = 1, \dots, 6$, respectively.
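The KDD data themselves are not reproduced here, but the scoring pipeline of this experiment can be sketched on synthetic data with injected anomalies; as above, the labels are used only for evaluation. The helper names and the synthetic setup are ours, not from the paper:

```python
import itertools
import numpy as np

def monomials(X, d):
    """All monomials x^alpha, |alpha| <= d, evaluated at the rows of X."""
    alphas = sorted(
        (a for a in itertools.product(range(d + 1), repeat=X.shape[1]) if sum(a) <= d),
        key=lambda a: (sum(a), a),
    )
    return np.stack([np.prod(X ** np.asarray(a), axis=1) for a in alphas], axis=1)

def sos_score(X, d):
    """Q_{mu,d} at the sample points, with mu the empirical measure of X."""
    V = monomials(X, d)
    Minv = np.linalg.inv(V.T @ V / X.shape[0])
    return np.einsum('ij,jk,ik->i', V, Minv, V)

# synthetic stand-in for the intrusion data: dense "normal traffic" in R^3
# plus a small fraction of scattered anomalies (labels used only to evaluate)
rng = np.random.default_rng(1)
normal = rng.normal(size=(500, 3))
attacks = rng.uniform(-6.0, 6.0, size=(25, 3))
X = np.vstack([normal, attacks])
labels = np.r_[np.zeros(500), np.ones(25)]

q = sos_score(X, 3)                  # outlyingness score, d = 3 as in the experiment
flagged = np.argsort(q)[::-1][:25]   # flag the 25 highest-scoring points
recall_at_25 = labels[flagged].mean()
```

On the real datasets one would sweep the threshold on `q` to trace out curves like those in Figure 2, rather than fixing the number of flagged points.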
• The inverse moment matrix SOS polynomial does detect network intrusions, with varying performance across the five datasets.
• Except for the ftp-data dataset, the global shape of these curves is very similar to the results reported in [18, Figure 7], indicating that the proposed approach is comparable to other dedicated intrusion detection methods on these four datasets.

In a second experiment, we investigate the effect of the value of $d$ on performance. We focus on the "others" dataset because it is the most heterogeneous. We adopt a slightly different measure of performance and use precision-recall curves (see for example [4]) to measure performance in identifying network intrusions (the higher the curve, the better); we call the area under such a curve the AUPR. The right panel of Figure 2 presents these results. First, the case $d = 1$, which corresponds to the vanilla Mahalanobis distance as outlined in Section 3.3, gives poor performance. Second, the global performance rapidly increases with $d$, then decreases and stabilizes. This suggests that $d$ can be used as a tuning parameter to control the "complexity" of $Q_{\mu,d}$. Indeed, $2d$ is the degree of the polynomial $Q_{\mu,d}$, and more complex models are expected to flag more diverse classes of examples as outliers; in our case, this means identifying regular traffic as outliers even though it does not correspond to intrusions. In general, a good heuristic for tuning $d$ is to investigate performance on a well-specified task in a preliminary experiment.

6 Future work

An important question is the asymptotic regime as $d \to \infty$. The current state of knowledge suggests that, up to a correct scaling, the limit of the Christoffel functions (when known to exist) involves an edge-effect term related to the support of the measure, as well as the density of $\mu$ with respect to the Lebesgue measure; see for example [2] for the Euclidean ball.
It also suggests connections with the notion of equilibrium measure in potential theory [17, 1, 7]. Generalizations and interpretations of these results in our context will be investigated in future work. Even though good approximations are obtained at low degree (at least in dimension 2 or 3), the approach involves the inversion of large, ill-conditioned Hankel matrices, which considerably reduces its applicability at higher degrees and dimensions. A promising research direction is to develop approximation procedures and advanced optimization and linear algebra tools so that the approach can scale computationally to higher dimensions and degrees. Finally, we did not touch on the question of statistical accuracy. In the context of empirical processes, this will be very relevant to understanding further potential applications in machine learning and to reducing the gap between abstract orthogonal polynomial theory and practical machine learning applications.

Acknowledgments

This work was partly supported by project ERC-ADG TAMING 666981, an ERC Advanced Grant of the European Research Council, and by grant number FA9550-15-1-0500 from the Air Force Office of Scientific Research, Air Force Materiel Command.

References

[1] R. J. Berman (2009). Bergman kernels for weighted polynomials and weighted equilibrium measures of C^n. Indiana University Mathematics Journal, 58(4):1921-1946.
[2] L. Bos, B. Della Vecchia and G. Mastroianni (1998). On the asymptotics of Christoffel functions for centrally symmetric weight functions on the ball in R^n. Rendiconti del Circolo Matematico di Palermo, 52:277-290.
[3] V. Chandola, A. Banerjee and V. Kumar (2009). Anomaly detection: a survey. ACM Computing Surveys (CSUR), 41(3):15.
[4] J. Davis and M. Goadrich (2006). The relationship between precision-recall and ROC curves. Proceedings of the 23rd International Conference on Machine Learning (pp. 233-240). ACM.
[5] C. F. Dunkl and Y. Xu (2001). Orthogonal Polynomials of Several Variables.
Cambridge University Press. MR1827871.
[6] G. H. Golub, P. Milanfar and J. Varah (1999). A stable numerical method for inverting shape from moments. SIAM Journal on Scientific Computing, 21(4):1222-1243.
[7] B. Gustafsson, M. Putinar, E. Saff and N. Stylianopoulos (2009). Bergman polynomials on an archipelago: estimates, zeros and shape reconstruction. Advances in Mathematics, 222(4):1405-1460.
[8] A. S. Hadi (1994). A modification of a method for the detection of outliers in multivariate samples. Journal of the Royal Statistical Society, Series B (Methodological), 56(2):393-396.
[9] J. W. Helton, J. B. Lasserre and M. Putinar (2008). Measures with zeros in the inverse of their moment matrix. The Annals of Probability, 36(4):1453-1471.
[10] E. M. Knorr, R. T. Ng and R. H. Zamar (2001). Robust space transformations for distance-based operations. Proceedings of the International Conference on Knowledge Discovery and Data Mining (pp. 126-135). ACM.
[11] J. B. Lasserre (2015). Level sets and non-Gaussian integrals of positively homogeneous functions. International Game Theory Review, 17(01):1540001.
[12] J. B. Lasserre and M. Putinar (2015). Algebraic-exponential data recovery from moments. Discrete & Computational Geometry, 54(4):993-1012.
[13] M. Lichman (2013). UCI Machine Learning Repository, http://archive.ics.uci.edu/ml. University of California, Irvine, School of Information and Computer Sciences.
[14] J. J. Oliver, R. A. Baxter and C. S. Wallace (1996). Unsupervised learning using MML. Proceedings of the International Conference on Machine Learning (pp. 364-372).
[15] W. H. Press, S. A. Teukolsky, W. T. Vetterling and B. P. Flannery (2007). Numerical Recipes: The Art of Scientific Computing (3rd edition). Cambridge University Press.
[16] G. Szegő (1974). Orthogonal Polynomials. Colloquium Publications, AMS, vol. 23, fourth edition.
[17] V. Totik (2000). Asymptotics for Christoffel functions for general measures on the real line.
Journal d'Analyse Mathématique, 81(1):283-303.
[18] G. Williams, R. Baxter, H. He, S. Hawkins and L. Gu (2002). A comparative study of RNN for outlier detection in data mining. IEEE International Conference on Data Mining (p. 709). IEEE Computer Society.
[19] K. Yamanishi, J. I. Takeuchi, G. Williams and P. Milne (2004). On-line unsupervised outlier detection using finite mixtures with discounting learning algorithms. Data Mining and Knowledge Discovery, 8(3):275-300.